WS #152 a Competition Rights Approach to Digital Markets

Session at a glance

Summary

This workshop at the Internet Governance Forum explored the intersection between competition law and human rights in digital markets, focusing on the European Union’s Digital Markets Act (DMA) and its implications for the Global South. Bruno Carballa from the European Commission explained that the DMA, which entered into force in 2022 and became applicable in 2023, targets “gatekeeper” platforms like Google, Meta, Apple, Amazon, Microsoft, and ByteDance that meet specific size and market position criteria. The regulation imposes obligations such as allowing businesses to operate outside platforms, preventing data combination without consent, ensuring interoperability, and prohibiting self-preferencing of services.


Camila Leite Contri from Brazil’s IDEC argued that economic concentration directly impacts human rights, particularly freedom of expression, citing examples like zero-rating practices that limit platform choices for lower-income users and Google’s interference in Brazil’s fake news bill debate. She emphasized the need to connect competition law with human rights discourse, noting that monopolistic power translates into political influence that affects democratic participation. Hannah Taieb from Speedio discussed how market concentration in digital platforms undermines media diversity and editorial authority, leading to filter bubbles and misinformation spread through opaque algorithms.


The panelists addressed questions about creating alternatives to dominant platforms, particularly in the Global South, suggesting solutions like public digital infrastructure (citing Brazil’s PIX payment system), open-source alternatives, and bolder regulatory approaches. They emphasized the importance of interoperability, algorithm transparency, and unbundling of platform services. The discussion concluded with calls for more interdisciplinary dialogue between human rights advocates and competition law experts to develop comprehensive approaches to platform regulation that protect both economic competition and fundamental rights.


Key points

## Major Discussion Points:


– **Digital Markets Act (DMA) and Economic Regulation**: The European Union’s DMA targets “gatekeeper” platforms (Google, Apple, Meta, etc.) with specific obligations to prevent abuse of market power, including allowing third-party payment systems, data portability, app uninstallation rights, and preventing self-preferencing. While primarily economic in focus, these regulations have indirect human rights implications.


– **Connection Between Economic Concentration and Human Rights**: Panelists explored how monopolistic control of digital platforms directly impacts fundamental rights like freedom of expression, access to information, and democratic participation. Examples included Brazil’s zero-rating practices that limit platform choice for lower-income users and Google’s interference in political discourse during Brazil’s “fake news bill” debate.


– **Global South Challenges and Alternatives**: Discussion of how developing countries like Brazil face infrastructure limitations and dependency on Big Tech platforms, with exploration of potential solutions including public digital infrastructure (like Brazil’s PIX payment system), open-source alternatives, and stronger competition authority powers to consider human rights impacts.


– **Business Models and Ethical Technology**: Examination of how current advertising-dependent models contribute to harmful content amplification and filter bubbles, with proposals for more ethical algorithms, transparent recommendation systems, and alternative monetization models that don’t rely solely on data exploitation and targeted advertising.


– **Regulatory Integration and Cross-Disciplinary Collaboration**: Strong emphasis on the need to bridge competition law, human rights advocacy, and technology policy, with calls for bolder regulatory approaches that consider human rights impacts in antitrust decisions and greater cooperation between different regulatory bodies.


## Overall Purpose:


The discussion aimed to explore the intersection between competition law/antitrust regulation and human rights protection in digital markets, specifically examining how economic concentration of power among Big Tech platforms affects fundamental rights and what regulatory and business model alternatives could better protect both competition and human rights.


## Overall Tone:


The discussion maintained a collaborative and constructive tone throughout, with participants expressing genuine enthusiasm for cross-disciplinary dialogue. The tone was academic yet accessible, with speakers acknowledging their different professional backgrounds while finding common ground. There was an underlying sense of urgency about addressing Big Tech dominance, but the approach remained solution-oriented rather than purely critical. The atmosphere became increasingly optimistic as panelists and audience members, particularly from Brazil’s youth delegation, engaged with concrete examples and potential pathways forward.


Speakers

**Speakers from the provided list:**


– **Raquel da Cruz Lima** – Human rights lawyer from Brazil, works at Article 19 Brazil and South America (human rights organization dedicated to protection of freedom of expression)


– **Bruno Carballa Smichowski** – Research officer at the European Commission’s Joint Research Centre, economist working in the digital markets research team


– **Camila Leite Contri** – Representative of IDEC (Institute for Consumer Defense) in Brazil, has background in competition law


– **Hannah Taieb** – Leading business development for Speedio (now part of Mediagenix), specializes in commercialization of recommendation algorithms, has background in consultancy for public institutions on ethical algorithm implementation


– **Jacques Peglinger** – From business side, teaches digital regulation at a Dutch university


– **Audience** – Multiple audience members including Laura (youth program participant from Brazil), João (youth delegation from Brazil), and Beatriz (assistant professor in law at University of Sussex, UK, teaches Internet law regulation and platform regulation)


**Additional speakers:**


– **Juan David Gutiérrez** – Mentioned as joining online but did not participate in the recorded discussion


Full session report

# Workshop Report: Competition Law and Human Rights in Digital Markets


## Introduction and Context


This workshop at the Internet Governance Forum brought together experts to explore the intersection between competition law and human rights in digital markets, with particular focus on the European Union’s Digital Markets Act (DMA) and implications for the Global South. The panel included Bruno Carballa Smichowski, a research officer at the European Commission’s Joint Research Centre (speaking in personal capacity); Camila Leite Contri from Brazil’s Institute for Consumer Defense (IDEC); Hannah Taieb from Speedio (now part of Mediagenix), specializing in recommendation algorithms for entertainment; and Raquel da Cruz Lima, a human rights lawyer from Article 19 Brazil.


The discussion featured active audience participation, including questions from Brazilian youth delegation members Laura and João, academic expert Beatriz specializing in internet law regulation, and Jacques Peglinger, who teaches digital regulation at a Dutch university.


## The Digital Markets Act: Framework and Implementation


Bruno Carballa Smichowski provided insights into the DMA, emphasizing that his views were personal and not official European Commission positions. The DMA entered into force in November 2022 and became applicable in May 2023, targeting “gatekeeper” platforms that meet specific criteria including revenues or market capitalization above 7.5 billion euros.


Six companies have been designated as gatekeepers: Google (Alphabet), Meta, Apple, Amazon, Microsoft, and ByteDance. These platforms face obligations including allowing businesses to operate outside their platforms, enabling data portability, permitting users to uninstall pre-installed applications, ensuring interoperability with third-party services, and prohibiting self-preferencing.


Enforcement has begun with cases against Apple and Meta. Bruno mentioned fines of 500 million euros for Apple and 200 million euros for Meta, though he noted these figures were preliminary. The DMA coordinates with the Digital Services Act (DSA) through shared procedures while maintaining distinct objectives.


Bruno acknowledged questions about whether generative AI should be included in DMA categories, indicating this remains an evolving area of regulatory consideration.


## Economic Concentration and Human Rights: The Brazilian Perspective


Camila Leite Contri presented research on how digital platforms function as gatekeepers of human rights, with economic concentration directly impacting fundamental freedoms. Her research on how lower socioeconomic classes use the internet in Brazil revealed concerning patterns.


She highlighted zero-rating practices where users with prepaid mobile data plans can access certain platforms—primarily WhatsApp, Facebook, and TikTok—without consuming data allowances. For citizens with limited data budgets (typically four gigabytes monthly), this artificially constrains platform choices, directly impacting freedom of expression and access to diverse information.


Camila provided a striking example of Google’s political interference during Brazil’s “fake news bill” debate. During the crucial voting week, Google displayed messages on its main website asking “How can the bill, the fake news bill, worsen your internet?” Additionally, searches about the fake news bill returned sponsored links promoting opposition to the “censorship bill.”


She argued for integrating human rights considerations directly into competition law analysis rather than treating them as separate domains, calling for competition authorities to adopt bolder approaches including potential company breakups.


## Media Diversity and Algorithmic Transparency


Hannah Taieb focused on how market concentration undermines media diversity, highlighting the shift from information consumption within defined editorial contexts to algorithmic feeds where logic is invisible and data collection intrusive. She noted that the creator economy and influencer culture increasingly dominate over trained journalism.


Hannah demonstrated that technical solutions exist for ethical approaches to content recommendation that maintain user experience while respecting privacy and providing transparency. She advocated for algorithm pluralism and interoperability as essential for diverse information ecosystems, emphasizing that users should understand recommendation logic and have choices about content curation.


## Public Alternatives and Digital Infrastructure


Bruno highlighted Brazil’s PIX payment system as an exemplary model of public digital infrastructure challenging private platform monopolies. PIX succeeded through public investment combined with mandatory interoperability requirements, forcing all financial institutions to integrate with the system.


This example provided evidence that alternatives to dominant platforms can achieve widespread adoption when properly designed and regulated. Key success factors included government backing, universal interoperability requirements, and user convenience matching existing alternatives.


The discussion extended to other potential public infrastructure areas, including cloud services and social media alternatives, with Bruno suggesting open-source solutions combined with public procurement requirements could promote alternatives across various sectors.


## Cross-Disciplinary Dialogue and Coordination


A recurring theme was the artificial separation between regulatory domains. Camila noted feeling isolated discussing human rights in competition law circles and equally isolated discussing market concentration in human rights spaces.


Raquel reinforced this from a constitutional perspective, arguing that states have duties to consider human rights implications in all regulatory decisions, including competition matters. This provides legal foundation for integrating human rights analysis into economic regulation.


The discussion explored practical coordination mechanisms, including shared procedures between regulatory frameworks and empowering civil society organizations to participate more effectively in market-oriented discussions.


## Global South Perspectives and Challenges


Audience members from Brazil’s youth delegation raised questions about how countries with limited technological infrastructure can develop competitive alternatives. Laura asked about Global South protagonism in platform regulation, while João questioned user incentives for switching platforms despite network effects.


The discussion revealed both challenges and opportunities. Infrastructure limitations create barriers, but examples like PIX demonstrate that strategic public investment with smart regulation can create successful alternatives in developing economies.


Panelists suggested focusing on digital public infrastructure development, supporting open-source alternatives through public procurement, requiring interoperability to reduce platform lock-in, and developing regulatory approaches considering human rights impacts in competition decisions.


## Regulatory Coordination and Implementation


Beatriz, an academic expert in internet law regulation, asked about coordination between different regulatory frameworks. The discussion revealed both opportunities and challenges in aligning competition law, data protection, and human rights approaches.


Bruno discussed coordination between DMA and DSA implementation, while panelists acknowledged that different jurisdictions may need adapted solutions based on legal systems, institutional capacities, and political contexts.


The conversation also addressed Brazil’s “revolving door” problem in cloud services, where officials move between regulatory positions and private companies, potentially creating conflicts of interest in infrastructure decisions.


## Key Areas of Agreement and Tension


Panelists demonstrated consensus that economic concentration has direct human rights implications and that interoperability is crucial for breaking platform monopolies. However, disagreements emerged regarding whether regulatory approaches should maintain primarily economic objectives with indirect human rights benefits, or directly integrate human rights considerations into competition law.


There were also differences regarding intervention intensity, with some emphasizing targeted approaches like the DMA while others advocated for more aggressive interventions including company breakups.


## Questions and Future Directions


The discussion identified ongoing challenges including network effects that maintain user loyalty to dominant platforms despite alternatives, infrastructure development needs in Global South countries, and sustainable funding mechanisms for ethical technology alternatives.


An audience question about generative AI regulation highlighted emerging challenges as technology evolves beyond current regulatory frameworks.


## Conclusion


The workshop successfully demonstrated the value of interdisciplinary dialogue in addressing platform governance challenges. By integrating perspectives from regulation, civil society, business, and academia, the discussion revealed interconnections between economic concentration and human rights while identifying potential solutions.


Concrete examples—from Brazil’s zero-rating practices to Google’s political interference to PIX’s success—grounded theoretical concepts in practical experience. The consensus between different stakeholder perspectives suggests maturing understanding of these challenges that could facilitate more integrated policy approaches.


Raquel concluded by referencing Article 19’s policy paper “Taming the Big Tech,” emphasizing the continued need for coordinated approaches addressing both economic competition and fundamental rights in digital markets.


Session transcript

Raquel da Cruz Lima: Hi, hello, everyone. It’s a great pleasure to welcome you all to this workshop called Competition Rights Approach to Digital Markets. Before we start, I’d like to invite anyone who would like to be with us here at the roundtable. You would have mics, so it makes it easier to ask questions at the end of the session. So please feel free to sit here with us, like André. I would like to thank especially our panelists for being here, first Camila and Hannah, who are here in person, and also Bruno and Juan David, who will be joining us online. Before I give the floor to our panelists, let me introduce myself. My name is Raquel da Cruz Lima. I’m a human rights lawyer from Brazil, and I work at Article 19 Brazil and South America, a human rights organization dedicated to the protection of freedom of expression. Under the perspective of freedom of expression, diversity and pluralism are vital. For that reason, human rights bodies, such as the Inter-American Court of Human Rights, have long stated that if the means by which freedom of expression is exercised are owned by monopolies, then the circulation of ideas and opinions is limited. Therefore, in order to protect freedom of expression and access to information, states have a duty to prevent excessive concentration. I would like to ask about the objectives of the DMA, how it proposes to address the issue of concentration of power in digital markets, and whether the protection of freedom of expression and other human rights was one of the goals pursued by the DMA. So, Bruno, I would appreciate if you could start by introducing yourself and let us know a bit more about the DMA. Thank you so much for being here.


Bruno Carballa Smichowski: Hello, thank you very much for the invitation. Hello everyone, I’m Bruno Carballa. I’m a research officer at the European Commission’s Joint Research Centre, which is an institution of the European Commission that does research to support evidence-based policy, including the DMA. I’m an economist working in the digital markets research team. So, I will try to walk you through in a couple of minutes the spirit of the DMA, to explain the Digital Markets Act, also called DMA, and how I think it links to the broader issues that were being discussed today. So, perhaps a very small disclaimer about what the DMA is. Oh, can you hear me? Yeah, you’re back. Ah, okay, sorry. So, as a first clarification on the Digital Markets Act, or DMA: it’s a regulation that has a purely economic objective, which is precisely to reduce the market power of the so-called gatekeepers. I will come in a second to which of these platforms are called gatekeepers. But this obviously has, indirectly, an effect on the capacity of these platforms to abuse their power in non-economic ways, which is more the topic of discussion of this forum, such as all sorts of human rights violations. That said, obviously there are other regulations that have a specific non-economic target and have more to do with human rights. I’m thinking specifically of the DSA, the Digital Services Act, which is a kind of companion sister regulation that aims to curb issues such as disinformation or discrimination and so on. So that said, I’m going to try to walk you through what is the spirit of the DMA, how does it work, and what are the expected effects of this new regulation. So first, in terms of timeline, it’s quite a recent regulation by legal standards. It entered into force in November 2022 and its articles became applicable in May 2023.
So we’re talking about two years now of the DMA, which given the length of this type of case is quite young. We’re starting to see, and I will talk about this in a couple of minutes, the first decisions on how companies are or are not following the rules of the DMA. So the idea of the DMA, as I said before, is to curb the power of the so-called gatekeeper platforms. For that, it first defines what the gatekeeper platforms are, using different criteria, both quantitative and qualitative. The first one is that they have to be big platforms; the aim is not to regulate every single platform on the internet, which would be practically impossible, but those that do have a much stronger impact. In that sense, the first criterion is that these platforms have to have had at least 7.5 billion in revenues or market cap over the last three years, to show they have big economic power in terms of size. They have to be part of one of the so-called core platform services. These are services that are deemed to be particularly important in the digital space: online intermediation, which could be any sort of marketplace, search engines, social networks, video sharing platforms like YouTube, number-independent communication services, which are basically messaging apps, virtual assistants, web browsers, operating systems, cloud computing, and online advertising. This is an initial list that is going to be revised, and one of the discussions going on right now, for example, is whether we should include generative AI, such as GPT and so on, under a new category, or whether it actually fits into one of the existing categories, namely search engines.
So these are basically platforms that are in critical areas, that are important in terms of size and therefore potential impact, and that have been in a durable position, meaning that these criteria have been met for at least the last three years. It’s not just by chance that they had a lot of users seasonality-wise; these platforms have held power for at least three years. So what is the aim of this, why this new regulation? Well, the main reason is that the existing regulation and competition law, which is aimed at sanctioning anti-competitive behaviors, has, for many different technical reasons, difficulty being applied to certain conducts that are typical of these platforms, and comes in too slowly. So the idea is to regulate ex-ante: before any abuse of power can take place, in the economic sense of the word, try to create new rules, new obligations for these platforms so they cannot abuse their position of power. So the platforms are designated as gatekeepers, and here you have the usual big platforms that you all have in mind. The designated ones are Alphabet, so that’s the Google conglomerate, Amazon, Apple, ByteDance, which has TikTok, Meta, with all the Facebook family of products, Instagram and so on, and Microsoft. So we’re talking here about, let’s say, the main platforms that have the most power on the internet. These gatekeepers, which have already been designated because they meet the criteria I was mentioning before, have new obligations they didn’t have until two years ago. These obligations are different ways of trying to keep the platforms from abusing their power. The first one is that they have to allow businesses to offer their products and services outside of the platform.
There have been many cases where, for example, an app by a small developer, or even big developers, has the issue that it has to go through the app store, which takes a big cut, usually around 30%, and the developer cannot promote in any way a link to, say, pay outside of the platform, or take business outside the platform’s system. So the platform is abusing the fact that it is precisely the gatekeeper between people who have phones and people who want to reach apps, because the only way to reach apps is through its store, and it uses that to extract all this value from the apps. And that, in turn, obviously will end up not benefiting consumers, because then apps are going to be… And then there are other provisions related to access to data. Business users, meaning, for example, an app developer or a seller on Amazon, usually don’t have access to the data about the people they interact with, which they could use in their daily business. So the new obligation is that they have to be given access to this data to compete better. A third obligation is to allow users to uninstall pre-installed apps. You see, many platforms have used the fact that they also run the operating system of a phone, for example, to pre-install apps you cannot uninstall. Safari on Apple is a classic example: you end up using their browser because they put it there and you cannot take it out. Now they’re obliged to allow you to take it out, so again, there can be more competition and new browsers can come in. And if you want, for example, a privacy-preserving browser like DuckDuckGo, you can download it and even uninstall the other one, which should not be pre-set to always keep you within the ecosystem of the dominant platform.
So they know where you go looking for food in Maps, they know what you look for in a search service, and they can combine that personal data. If you don’t consent to that, they should not be able to do it, nor to use the power of merging data from many markets so that nobody can challenge them in any market. Because obviously it’s very difficult to replicate the fact that a few gatekeepers have access to multiple sides of our lives as users, or think of this even for business users: people who use the Microsoft suite, who have the cloud and the operating system. They collect all the data, and it’s very difficult for someone from a non-gatekeeper platform to replicate that. In the same spirit, another important obligation is that of ensuring interoperability for third-party software. A classical problem that a lot of complementors in the ecosystem face is that, because the platforms are not interoperable, they can’t add services on their own. Then, about the advertising market, there are similar obligations about having more transparency on the data and the pricing of advertising, because those markets are very concentrated into basically Google and Facebook, so Alphabet and Meta, which control pretty much the whole value chain of online advertising. Again, it’s very difficult to compete with that, which in turn leads to higher prices for advertising and eventually to higher prices for us consumers for anything that uses advertising online. Finally, other obligations include allowing app developers to use third-party payment systems. In the same spirit of letting them do business outside of the gatekeeper, or not go through the gatekeeper, they should be able to pay with something other than, for example, Google Pay or Apple Pay. And there is the so-called anti-self-preferencing obligation: when a platform is also a seller on the platform, or has another product, it has an obligation not to push users to use it.
So for example, when you looked for something on Google, you might have found that the link always goes to Google Maps. Now, in Europe, it doesn’t anymore, because of this. Or Amazon, which allegedly tries to push its own products, so you end up buying Amazon Basics on the platform, not from independent sellers. These types of things are now being scrutinized by the Commission to make sure the platforms do not use their power as gatekeepers to make competition less fair and therefore harm consumers through less competition. Then, lastly, there are obligations about not preventing users from switching between apps by making it technically difficult to change provider, and about informing the Commission of any potential acquisition that might impact this. So these are the obligations. As you see, they are all aimed at regulating platforms that are big, have a lot of impact, and operate in critical markets. The rules and obligations are asymmetric, in the sense that only these big platforms have them and not the small ones, so that they cannot abuse their power and harm competitors and consumers. And where are we with this now? Again, this is young, it’s only two years, but in these two years we already have four cases open: three against Apple and one against Meta. Against Apple we have one on anti-steering, or anti-self-preferencing, the idea that the platform might benefit its own products using its power as a gatekeeper, and so far Apple has been hit with a fine of 500 million. This is all public information; you can check the decisions and the whole process.
A case has also been opened against Apple on the issue of non-compliance with the choice screen, the idea that to give users other options you should show a choice screen: for example, when you want to open a link, do you want to use Apple’s browser or another browser? The Commission found they are not compliant in the way they are implementing this, because they might be trying to trick users into still using their own browser despite the choice screen. Another case opened against Apple, the third one, is about the specification decisions on connected apps. This is a more technical one, but it’s basically about how Apple is implementing interoperability, and the case is about the Commission saying: you’re not really making this as interoperable as it should be to make it easy for any third party who wants to add products to your ecosystem. And finally, the last ongoing case is one against Meta, basically challenging the consent-or-pay model, the idea that you cannot use a product unless you consent to what are deemed abusive terms regarding access to and use of your personal data. Basically, the Commission is saying that Meta is still not offering a free, equivalent, less data-intensive alternative. So it’s either you use my product for free and we exploit, let’s say, the data we collect from you as a user, or you have to pay me. The Commission is saying there has to be some middle point. So far, Meta has been fined 200 million on this, and this is obviously under appeal. But as you see, in two years we already have four cases open, and more will probably be opened or scrutinized in the future.
And hopefully, if the application of the DMA is effective, we should be seeing digital markets in which the dominant platforms have less capacity to abuse their gatekeeping power, which should, again, benefit consumers and, in turn, give the platforms less power to abuse consumers or users in other, non-economic ways. Thank you very much.


Raquel da Cruz Lima: Thank you so much, Bruno, for bringing this great perspective on the DMA. You were quite clear that the objectives are really related to the economic field, but we heard some concepts that are really close to human rights, such as preventing the platforms from abusing their power, and also the concern about not harming consumers. So I’d like to turn to the Global South and ask you, Camila: hearing about this idea of gatekeepers that the DMA had in mind, do you think that, from the perspective of Brazil or the Global South, we could consider the major digital communication platforms as gatekeepers of human rights? And do you think there is a link between economic concentration and fundamental rights? And also, Camila, if you could start by introducing yourself, I’d appreciate it. Thank you.


Camila Leite Contri: Of course. Thank you so much, Raquel, and Article 19. It’s a pleasure to be on this panel with you. Short answer: yes, and I’ll go for it. It’s a pleasure to be here representing IDEC, the Brazilian Institute for Consumer Defense, which has more than 35 years of experience protecting consumers through advocacy, campaigning and strategic litigation, including against big tech. I have a background in competition law as well, so, disclaimer. But I always felt somewhat isolated in both fields: in competition law, when talking about human rights in the digital sphere, and in civil society, on the human rights side, when talking in the language of the market. So my personal goal, my personal wish, is to connect both fields, to answer this question, and to have more people breaking this barrier, understanding that, yes, monopolies and competition issues are key to human rights and we should analyze them together. But the reality is that this is, I believe, the only panel at the IGF where we are talking about competition or anti-monopoly, and I don’t say this as a personal criticism, but to stress the need to discuss this more. I think this is a consequence of a still-pervading narrative that these are only market questions. We should understand that competition authorities currently have the attribution and the power to consider these kinds of consequences, maybe in a mediate, indirect way; but monopolies, the concentration of economic power, are foundational to most of the issues that we see, not the only ones, but most of the problems. We currently have a society that is tech-mediated, our citizenship is tech-mediated, and I can personally talk about Brazil, sharing some experiences of how Brazilians deal with the internet, especially the lower-income classes.
IDEC has research on how lower-income classes use the internet, and in Brazil we still have zero-rating practices: people who use prepaid mobile data, people with data caps, mostly use the applications that don’t count against their mobile cap. So we currently have people with, for example, four gigabytes per month, while WhatsApp, Facebook and TikTok don’t consume their data allowance. Why would someone have an incentive to use another platform? And this matters for how the debate develops, for how people express themselves. So we currently have a concentration of discourse, of the possibilities for how we can express ourselves, and I think this is one good example. The second thing is the way we use platforms: it goes beyond not being a real choice. The platforms are profiting from, sorry, I forgot the word in English, from political disputes and extremism, and this kind of discourse unfortunately affects freedom of expression, while the platforms are gaining, are profiting, from it. This is very concerning. And the third point, which for me is an example of how economic power also translates into other kinds of power and directly or indirectly affects human rights, is how platforms also interfere in the political dimension, in political discourse. For that I would like to bring a concrete example of how one specific big tech company influenced public discourse in Brazil. But first let me get back to what Bruno said about the DMA. In Brazil we are currently discussing not a DMA, but the possibility of developing a new regulation, or rather a way to improve the attribution of the competition authority to deal with digital markets. And although human rights are not embedded in it, some examples of how the DMA could be interpreted as having good consequences for human rights could also be imported into Brazil and adapted.
The limitations on data sharing and the prohibition of the pay-or-consent model, so the prohibition on people having to decide whether they have their rights respected or have to pay for it, is a good example. The second is that creating possibilities for users to choose the platforms they use could also mean having platforms whose moderation rules are less restrictive of freedom of expression, and could also promote other rights, for example by limiting misogynistic speech. And the third example, which is where I come to the concrete example from Brazil, is about limitations on self-preferencing. Bruno mentioned the example that in Europe, when you search for a place, Google cannot direct you straight to Google Maps, because this is a way of self-preferencing another Google service. In Brazil, we had an interesting case presented before the competition authority, CADE, that was about what we might call political self-preferencing. During the week of the vote on the Brazilian DSA, the Brazilian Digital Services Act, publicly known as the fake news bill, Google put a message on its main website, below the search bar, saying that the fake news bill could increase the confusion about what is true and what is a lie in Brazil. This phrase linked to a blog post saying that the bill could change the internet as you know it for the worse. And when you searched for the fake news bill, the first link that would appear was a sponsored link, promoted by Google, saying no to the censorship bill. So how can we have a free space for debate when what is, in practice, basically the only search platform that people use puts this out and changes the whole debate? Is this a free way to interact with platforms?
So, moving on to what we can do about this: I believe we share a common understanding that this power is exercised in different ways, that economic power can be translated into political power, and that this has consequences for human rights. So what can we do? Maybe, as civil society, empower ourselves to also speak this market language. One concrete proposal from Article 19’s work on taming big tech is the unbundling of content curation and hosting services. I’m happy to continue talking. Thank you so much.


Raquel da Cruz Lima: Thanks, Camila. This was so great and spoke so close to my heart, because I’m only a human rights lawyer, that’s my only background, and for me everything is new now, discussing competition law and antitrust. I think what is really powerful about being here at the IGF is exactly that idea of bringing people together from different sectors: the opportunity to talk to business and state, civil society and academia. This is exactly what we need, and I don’t think we need to go back and forget our backgrounds, but rather put them together and make them more powerful. And something you said was quite important: the idea that businesses and also other authorities are all bound by the constitution. In the human rights field, especially in the international community, we have long discussed the duty of conventionality control by every organ of the state. So whatever their conduct is, they have the duty to take into consideration the international treaties ratified by the states. So why are human rights not taken into consideration when competition is discussed, and also when other actors act in this field? As you said, when tech is mediating access to every kind of right, we need human rights to be taken into consideration, and we from human rights backgrounds also need to learn more from business, from competition law and so on. With that, I’d like to turn to you, Hannah, because, as Camila said, there are discussions in the private sector that have an impact on our rights. I think you now have a great experience to share with us about what we can expect in an environment with more competition, as Bruno brought up: what kind of business opportunities can emerge, and how those opportunities may take us to businesses that are more aligned with human rights goals and standards. And also, please, if you can introduce yourself before you begin. Thank you so much.


Hannah Taieb: Thank you. Thank you very much for having me here. I’m leading business development for Speedio, which is now part of Mediagenix, a company specialized in the commercialization of recommendation algorithms that we want to be ethical, controllable and accessible. We specialize in the entertainment sector, working with players like Claro in Brazil, Sky, DirecTV Latam, Canal+, Globo and TV5, and I also have a background in consultancy for public institutions on how to implement more ethical algorithms globally. So, as you were saying, private companies, whether solution providers like us, traditional media outlets or even social media platforms, while of course pursuing profit and protecting their own interests, also bear the responsibility not only to respect the law, but also to set the standards for ethical and transparent AI, especially in media and entertainment. Our influence, shall I say, goes beyond business operations: these companies help shape the very technologies through which millions of people engage with culture and information every day. So the way these companies, including us, do business directly affects our rights, and I mean our rights as civil society, to information, free speech, media freedom and privacy, which are, I think, the human rights we are discussing today. One trend we can observe is a rise in the distribution of information without anchors, which brings media fragmentation and a decline of editorial authority. As we all know, content and information now circulate primarily through social media, which is leading to, indeed, a monopoly of big tech over the distribution of information. And the presentation of this content is governed by algorithms that remain opaque and inaccessible to most users.
And personalization, in that case, is done largely by collecting private data, as you know, with no or little regard for actual transparency or contextual understanding. This lack of contextual framing contributes to the spread of misinformation, weakens the audience’s ability to detect bias and undermines the visibility of sources. Earlier generations encountered information in very well-defined contexts, in environments such as newspapers, reading the New York Times or watching the BBC, which implied assumptions about style, tone and political orientation, in other words, context. But today, for many young users, the only information they encounter comes through feeds whose logic is invisible and intrusive. Media become more ambient and anonymous: the user is exposed, but not oriented by an editorial line. I think we are all familiar here with the theory of the filter bubble, and its impact on democracy over the past 10 years no longer needs proving. It affects public discourse, political life and access to shared truth. From another angle, traditional media organizations are increasingly burdened by economic pressures and by the difficulty of achieving profitability, which undermines their position and their capacity to provide quality information. Many of the traditional media outlets we gravitate towards today are financed either by wealthy owners or by public funding. Even media platforms that in the past thrived on the attention economy, relying on advertising, are today facing financial difficulties, because advertisers’ budgets are going towards individuals, and by individuals I mean influencers, and the creator economy has become a dominant force. The data is there, and at the same time, the branded content of the past decade has blurred the line between advertising and journalism.
I think these shifts raise important concerns about access to reliable information, especially given the monopoly of social media we have today, and of course various international regulations have been introduced to address this issue. So the question we might raise today is: is this an opportunity to rethink business models in support of human rights? Individuals are gaining traction over institutions as established media outlets face mounting difficulties in reinventing themselves and preserving their relevance. Of course, this transformation is not necessarily negative, because social networks have allowed new, less dominant voices to emerge and have given visibility to creators who produce original and sometimes very relevant work. However, with lower entry barriers, the distinction between influence and expertise becomes blurred, and in many cases creators with little journalistic background command more attention, both economically and in terms of audience, than professionals trained to verify and contextualize information. And I’m not even mentioning the rise of generative AI, which will of course add more and more non-verified content to the already massive ocean of content we have today. So, to summarize: facing this abundance of content, it becomes essential to imagine new economic models that are ethical, content-centric, less dependent on advertising revenue, and designed to restore clarity and control to both users and producers. But there are some promising directions, of course. First, for traditional media outlets, and I say that in opposition to social media: what we believe should be suggested is to push for voluntary proprietary platforms and sovereign algorithms.
What this means is that, when it comes to preserving access to information, one strategy is to support or develop independent platforms that blend algorithmic curation and editorial supervision.


Raquel da Cruz Lima: Perfect, that’s so great and so powerful, this idea of choosing what to prioritize, to keep in mind. And something else you mentioned, Hannah, I think is important not only for digital markets but for the media in general. I heard a lot yesterday and the day before about trust, and I think, as you mentioned, users have to know the logic; it must be explained to them what is there. At least in Brazil, that also applies to traditional media, because often the positions of traditional news outlets are not quite clear. They do not make transparent to us users why, from their particular perspective, you read different stories. So I think transparency is always a key issue in building trust and enabling freedom of expression and access to information. Right now we should have our fourth panelist, but he couldn’t join us. So I will open the floor for any questions or interventions you’d like to make. We have around 12 minutes, so it’s actually quite good time to hear from you, online and also here. You can talk from the mic or come to the round table, and please introduce yourselves when making your question.


Audience: It’s working yes can you it’s a silent section we can hear you i’m laura i’m in the youth program i’m from brazil too and i loved what we discussed here your panel was amazing but i wanted to know in in a competition scenario how the global south could increase the protagonism when we don’t have the infrastructure to have our own means like you have a monopolization from google from meta and we can start our own social media our own platforms we can have it but as you said google is the main used how do we get some protagonism in this thank you.


Raquel da Cruz Lima: You can go.


Audience: Thank you for this workshop, it was really interesting. I’m João, I’m from Brazil too, I’m in the youth delegation. I would like to ask, especially thinking of the DMA in the European Union: we see some changes, for example in the App Store and in iOS in general, like alternative marketplaces, so in the European Union we see alternatives being created. But I would like to ask how to overcome obstacles regarding incentives for users. Although alternatives to big tech might be available, how can the incentives to use big tech services be diminished or overcome in a context where it’s sometimes easier to use big tech services or platforms? These regulations, especially in the European Union, try to deconstruct that and change the institutional arrangement, but how, in practice, can people feel incentivized not to use big tech platforms and services? Thank you.


Raquel da Cruz Lima: Thank you. You can go.


Jacques Peglinger: My name is Jacques Peglinger. I’m from the business side, but I’m also teaching digital regulation at a Dutch university. My question is primarily for the first speaker, who elaborated very well on the Digital Markets Act from the EU. What we see in Europe, of course, are these very fragmented local markets. The DMA basically addresses Europe-wide big platforms, but what about the local champions? And from there, the question: how is Brazil handling local champions, or are there really just nationwide platforms? Thank you.


Raquel da Cruz Lima: . I’m going to pass it over to Beatriz.


Audience: Thank you. Is it okay?


Raquel da Cruz Lima: Yes, please.


Audience: Hi, my name is Beatriz, I’m also from Brazil, but I’m currently a assistant professor in law at the University of Sussex in the UK and one of the things I teach is Internet law regulation, platform regulation. I’m interested to hear from the panel, what do you see in terms of the perspective of the government and the regulatory perspective as well, the need to empower organisations to join the conversation, human rights organisations, people involved in platform governance more broadly into this kind of more economic-oriented or market-oriented aspects of regulation, but I’m curious to hear from the panel, and maybe Bruno, but I know members of the panel have also been studying that. Let’s start with you and then, Ritja. Let’s take a look at informing the adjudicators on the market-oriented conversation, but I do believe that connecting platforms is really a engagement and more of a holistic conversation about how to regulate platforms, not only from this market or economic perspective, but also from the perspective of data protection. So, let’s start with you, Ritja. You mentioned data protection and the GDPR, and that is some kind of, at least in Europe, relevant case law about how considerations of data protection could inform and kind of delimit it, the bearing between what is kind of acceptable and what becomes anti-competitive behavior. Do you see the perspective of issues that have the advantage on equal status or on equitable status as well, or is it just a misconception of what methods are used to help to draw the boundaries in terms of loss of dominance in competition? Are there limits to the decision-making process? Thank you. And I think, I mean, more broadly, this would also help to kind of counter some narratives that we see that there’s a conflict between the two. 
When the digital markets bill in the US was being proposed, there was a debate among some academics that breaking up the digital public sphere into smaller players would make it harder to control in terms of, I don’t know, hate speech or platform regulation models. So this relationship between market structure and how to hold platforms accountable is not an easy one to tackle. But I would say it’s important for regulators as well to have this perspective of how things join together. So, yeah, I’m curious to hear from you. How do you see that?


Raquel da Cruz Lima: Great. I think we don’t have any other questions, so I’ll just add to that before I give the floor back to our panelists. The first question I would add is for all of you: do you see any priorities in terms of regulation now, to increase competition and also make the market more respectful of human rights? And the second question, I think, is more directed to Hannah and Bruno. Hannah mentioned a bit about advertising, and I would like to know if, from a European perspective, you already see any changes, because we also have concentration in the advertising market. So do you see any changes in breaking up the advertising market a bit and making it more aligned with human rights? I think we start with Bruno. We have around seven minutes for each of you to answer the questions and also make your closing remarks. So Bruno, you can start, please.


Bruno Carballa Smichowski: Thank you very much. Lots of good questions that I’m going to try to squeeze answers to into a short time. Again, this will be pretty much my own personal opinion, and not an official Commission one. So perhaps starting with the first question, about alternatives: my personal view is that there is no magic one-size-fits-all solution to this, especially for countries like Brazil; I am myself Argentinian, so I understand where you’re coming from. I think different ingredients can be added to alternatives. One, for the more infrastructure-like parts of the digital world, is public alternatives, which can counterbalance market power in a very strong way; Brazil, I think, is quite exemplary with PIX in this case. But of course, these have to come with proper regulation that makes them a real alternative, as actually happened in the case of PIX. We were discussing this recently in a workshop in Rio with people from CGI.br and the Ministry of Finance, and when I asked them why they thought this public alternative was a success, they told me: basically because we forced all the companies, digital and non-digital, to be interoperable with PIX. Obviously, the technical solution had to be good, useful, practical. Big tech is already very good at doing this, but the public sector can replicate it from a technical point of view. And it came with good regulation that made it a real competitive constraint on any other service. People can use whatever service they want, but the fact that this interoperable alternative exists gives much less power to any platform that could have imposed itself as the gatekeeper of digital payments. So that’s one part of the solution, which I see working in some areas, not all (I don’t think you can do this very effectively for social media, for example): certain public infrastructure layers.
We can think of the same in terms of cloud, in terms of certain parts of the digital chain, at least for the government itself, for critical things. I think open-source alternatives are to be promoted: for instance, to me it should be clear that government offices should be using open source by default, and this could be transmitted to public procurement requirements, for example. And then for those things that, from an economic point of view, don’t make much sense to be publicly owned, good regulation. I think that’s what we’re experimenting with: the DMA was perhaps the first, but I see Brazil making nice advances in that respect, and regulation of all sorts, I’m talking not only about the economic kind. Here I think there is a trial and error going on throughout the world. Perhaps the European Union was the first, but very similar legislation has already been put in place in the UK, Australia is discussing the same, and many other countries are following. So I think we’ll have a nice laboratory of what works and what doesn’t, and in that way being second movers is perhaps an advantage for countries like Brazil, because they can already learn from the mistakes we will surely make in the European Union. About local champions, I think you’re right. The DMA, contrary to the DSA, the Digital Services Act, doesn’t by design require that the platforms be active in many local markets, in many countries, to be more specific. But given the size thresholds and the particularities of digital markets, it ends up usually covering Europe-wide or even international platforms. That doesn’t mean there couldn’t be any fostering of local champions. There’s a parallel discussion going on, as you may have seen, in the world but also in the European Union, about industrial policy
and digital industrial policies. For example, for AI there’s a battery of new legislation, and pipelines and strategic plans are already in place for how to foster those champions in the AI chain. So I see those two as complementary types of regulation: industrial policy on one side, and regulation of the existing big players on the other. Again, from a practicality point of view: why not at each local market level? Because it takes a lot of time and effort to regulate, so you have to aim for those with the highest impact, which end up being the very international ones. Then Beatriz’s question on regulation. Nice to see you again, Beatriz. About the dialogue between types of regulation: it’s actually something that is happening from the inside already, for example between the DMA and the DSA. The DSA is about systemic risks like misinformation, and some platforms are obviously regulated by both regulations. And I think there is indeed a dialogue in two ways. One is in terms of procedures: the DSA, in terms of procedures, is very similar to competition cases, and colleagues from DG COMP are actually helping colleagues in DG CONNECT with how to carry out these investigations from a procedural point of view, although the object is very different: not economic, but about fundamental rights. So I think there’s a lot to learn from the longer experience of competition law, and vice versa, in both directions. In terms of methods, I foresee, and already see in some colleagues’ work, cross-fertilization. For example, self-preferencing, which we mentioned in two presentations already, is a classic example: the way you could monitor self-preferencing from an algorithmic point of view could be enhanced by the techniques that colleagues working on the DSA are developing to monitor harm to users.
So I see a cross-fertilization between those types of regulation in both methods and procedures. Then, at the more political level, coming back to the first question, I think there’s everything to gain between different jurisdictions in learning from the different institutional designs. In the Commission, they ended up deciding that the DSA should be one piece of legislation and the DMA another; at the beginning they were actually thinking of making one big regulation, and then they said, let’s go for one that is economic and one that is about fundamental rights, although they overlap in the types of platforms they regulate. But that’s an institutional design choice. It could be that some other jurisdiction decides to put them under the same umbrella, and that wouldn’t necessarily be bad. I think there’s everything to gain in dialogue between jurisdictions about institutional design: what worked, what didn’t, what we can learn from previous mistakes and previous successes. On advertising, to finish: I think it’s still too soon to tell, because the decisions on advertising are still ongoing. These are highly technical matters and they require time, just like competition law. In my personal opinion, it was obviously too late, in the sense that the whole advertising chain, as a very nice CMA market study put it, is highly concentrated in two firms, and that’s a problem. But at this stage, what we can expect is to do good regulation, I think. And if anything, in terms of harms that are non-economic, I think this is where the DSA, the Digital Services Act, should kick in, in the sense that if those targeted advertisements, for instance, promote eating disorders to minors, because that’s a…


Raquel da Cruz Lima: Thank you, Bruno. Just a small footnote for everyone here who is not from Brazil or not so familiar with Brazil: Bruno mentioned an important experience from Brazil, PIX, which is a payment method, P-I-X. Just to mention it also for our rapporteurs who are not from Brazil. So now, Camila, you have the floor, you have seven minutes.


Camila Leite Contri: Thank you. First, on the questions, uniting those from Laura and from João: happy to see young people interested in this issue, come join us in these discussions. I think you brought a good example of how network effects work in practice. We are on platforms because our friends use these platforms; we are on these platforms because there is content created for us. How can we let go of this if everyone is there? Sorry, I’m hearing myself. It seems like a chicken-and-egg problem: Laura was asking how we can move to alternatives when we don’t have these alternatives and everyone is on the incumbent platforms. So yes, this is challenging, but having alternatives can at least make people think more about the possibilities they could have; otherwise we remain enclosed in this kind of platform. There is also digital literacy work that we have to do. On the questions related to alternatives, I think there are some things we can do right now and some alternatives we can still promote in the longer term. The first thing competition authorities could do is adopt bolder theories of harm; in competition law jargon, theories of harm are basically how competition authorities judge a competition case. Another is breaking up companies: as we see that they have an immeasurable impact on our lives, maybe the solution is that they don’t have to be that big. That’s why I praise the solution presented by Article 19 on unbundling, for example, hosting and curation services. Another concrete example is the judgment on data protection and competition law in the EU that I mentioned. In this Facebook case, a decision made by the German competition authority went to the European Court of Justice.
And the judgment of the European Court of Justice was about whether the competition authority in Germany could interpret a data protection violation as a competition breach, and it gave good parameters for how authorities can consider a breach of another law within their attribution. In this case, the solution was that the competition authority would have to check whether the data protection authority had made a similar decision. If yes, it could not depart from it, but it could reach its own competition law conclusions. If there wasn’t a similar decision by the data protection authority, it could consult them and seek cooperation, and if they presented no objection, it could continue with its own case. So yes, we are thinking about data protection and competition. Why can’t we think the same way about human rights? Why can’t we understand the human rights impact and bring this into competition authorities? But this demands bold public servants. So Bruno, I know you are at the European Commission; I’m really happy that we can share this panel, and I see your availability to have these discussions with us, and I hope other authorities, such as the Brazilian ones, have the same openness. And I do believe so.


Raquel da Cruz Lima: Thank you, Camila, you were so precise with your time. Now, Hannah Taieb, your answers and closing remarks, please.


Hannah Taieb: Sure. First, to build on what Camila was saying: from a purely technical standpoint, it is absolutely possible to do personalization as it is done on social media, not for advertising but for the user experience, meaning a personalized feed on whatever platform we use, while respecting the GDPR and still offering a very good experience. The idea that many big tech and social media platforms have internalized, that you need sensitive data such as gender, age, or other demographics to deliver a good user experience, is simply not true. They rely on that data for advertising, not for the user experience. So from a private-sector point of view, regulation on this could be a bit stronger, and it would probably not harm that part of the business, especially if we are looking for a more virtuous way of monetizing media anyway. Relying heavily on advertising, as we do today, brings other problems, such as the openness of platforms: let's be realistic, most of these platforms are free because they rely on advertising. Subscription or user contribution is not an ideal solution either, but it is an avenue for reflection. The question is also what we are looking for: information, or interacting with our friends? Platforms that try to combine the best of both experiences in one place are probably not viable in the future. For instance, on Brazil not having its own infrastructure: there are also layers between choosing, say, AWS or Google for big media companies. Take Globo, which I would call a local champion. In deciding how to push information to its users on GloboPlay, Globo could rely on the Google algorithm, or on a proprietary algorithm, provided either by small vendors like us or developed in-house. But for that you need subsidies, both for small, more ethical tech vendors and for the media outlet itself. There is also room to incentivize private companies to do more open source, because today it is honestly very complicated for smaller vendors, and for vendors that care about ethics, to create innovative open-source solutions that scale. For that, there should be incentives, whether through regulation or through subsidies. I don't have the answers, I'm not a regulator myself, but for now it is just a matter of willingness, and that is not enough to encourage it. This, I think, is what could be interesting for innovation at scale. Then, on advertising: the market is consolidating, and we are still watching the decline of cookies and looking toward new ways of doing contextual advertising, with a proper way of explaining why an ad is suggested to a user, on the same principle as explaining why a piece of content is pushed to a user. But today it is not enough, and as long as the model relies on advertising, it will be very hard to fight that kind of lobbying from advertisers without killing the advertising and media markets. We still have a lot of work to do before that. So yes, I am in favor of stronger regulation on that part.


Raquel da Cruz Lima: Thank you. I'll now give each of you one minute for a closing remark. You can start, Camila.


Camila Leite Contri: Thank you. One thing I wanted to react to, related to what Hannah and Bruno said, concerns clouds. In Brazil we are still very dependent on big tech clouds, and this is also a matter of data sovereignty. So Brazil should focus on this, and also pay attention to revolving doors: in the Brazilian public health sector, for example, a person working in the government went to a cloud company and then came back to the government, which raises concerns about how we can create alternatives. My final point would be to have this discussion in Brazil about funding alternatives, digital public infrastructures for example, and about how we can create alternatives from small companies but also from the public sector, beyond regulation, of course. It was a pleasure to be here. I'm very excited and happy to continue these discussions. Thank you so much.


Raquel da Cruz Lima: Thank you, Camila. Bruno, would you like to say some final words?


Bruno Carballa Smichowski: Just this: I would like to echo the interest in dialogue across both disciplines and jurisdictions. I'm very happy to continue it. Thank you very much, thank you everyone for your attention, thank you Article 19, and thank you for the invitation.


Raquel da Cruz Lima: Thank you. Hannah?


Hannah Taieb: Just to add one thing: pushing a bit harder on interoperability and algorithm pluralism would be great for a better distribution of information. The technical solutions, again, are here. It is not a matter of technicalities, of open APIs or whatever you call them; it is a matter of regulation and of goodwill from big tech. And if we have to count on that goodwill alone, well, you know.


Raquel da Cruz Lima: Perfect. Brilliant. And just to finish, I would like to invite you all to access our policy paper by Article 19, called Taming the Big Tech. We have a Portuguese version for everyone, available on our website. As Camila briefly mentioned, it explores the idea of unbundling, in social media, the services of hosting and curation. That would also be made possible by more interoperability, and it would create incentives for users to leave the big platforms, and for businesses too: there could be other business models built around curation, offering different standards for how we interact with our friends and for the content we see, with more transparency. So check it out on our website. Thank you all so much. I think we can end with this idea of being a bit radical, a bit more bold: maybe we can tackle the power of big tech and have a more diverse Internet. Thank you all so much for joining us today.


B

Bruno Carballa Smichowski

Speech speed

157 words per minute

Speech length

3648 words

Speech time

1387 seconds

DMA targets gatekeepers with economic objectives to reduce market power and prevent abuse

Explanation

The Digital Markets Act is a regulation with pure economic objectives aimed at reducing the market power of so-called gatekeepers. While it has indirect effects on platforms’ capacity to abuse power in non-economic ways like human rights violations, its primary focus is economic regulation of dominant platforms.


Evidence

DMA entered into force in November 2022, became applicable in May 2023, and targets platforms with at least €7.5 billion in annual EU turnover over the last three financial years (or a market capitalization of at least €75 billion), operating in core platform services such as search engines, social networks, messaging apps, etc.


Major discussion point

Digital Markets Act (DMA) and Competition Regulation


Topics

Economic | Legal and regulatory


Agreed with

– Camila Leite Contri
– Raquel da Cruz Lima

Agreed on

Economic concentration directly impacts human rights and requires integrated regulatory approaches


Disagreed with

– Camila Leite Contri
– Raquel da Cruz Lima

Disagreed on

Primary regulatory approach – economic vs. human rights focus


DMA creates new obligations for platforms including allowing business outside platforms, data access, app uninstallation, and interoperability

Explanation

The DMA imposes asymmetric obligations only on large gatekeeper platforms to prevent abuse of their dominant position. These include allowing businesses to operate outside the platform ecosystem, providing data access to business users, and ensuring technical interoperability with third-party services.


Evidence

Examples include allowing app developers to promote payment outside app stores (avoiding 30% cuts), mandatory data sharing with business users, allowing uninstallation of pre-installed apps like Safari, prohibiting combination of personal data across platforms without consent, and ensuring interoperability for third-party software


Major discussion point

Digital Markets Act (DMA) and Competition Regulation


Topics

Economic | Legal and regulatory | Human rights


Agreed with

– Hannah Taieb

Agreed on

Interoperability is crucial for creating competitive alternatives to dominant platforms


Disagreed with

– Camila Leite Contri

Disagreed on

Scope of regulatory intervention – targeted vs. comprehensive approach


Four enforcement cases already opened against Apple and Meta with significant fines imposed

Explanation

Despite being only two years old, the DMA has already resulted in active enforcement with multiple cases opened against major platforms. The European Commission has imposed substantial financial penalties for non-compliance with DMA obligations.


Evidence

Three cases against Apple (anti-steering/self-preferencing with a €500 million fine, choice screen non-compliance, interoperability issues) and one against Meta (consent-or-pay model with a €200 million fine)


Major discussion point

Digital Markets Act (DMA) and Competition Regulation


Topics

Economic | Legal and regulatory


Brazil’s PIX payment system demonstrates successful public alternative through mandatory interoperability requirements

Explanation

PIX serves as an exemplary case of how public digital infrastructure can effectively counterbalance market power when combined with proper regulation. The success came from forcing all companies to be interoperable with the public payment system, creating a real competitive constraint.


Evidence

PIX forced all digital and non-digital companies to be interoperable, providing a technical solution that was good, useful, and practical while being backed by regulation that made it a real alternative to private payment gatekeepers


Major discussion point

Alternative Platforms and Market Solutions


Topics

Economic | Infrastructure | Development


Agreed with

– Hannah Taieb

Agreed on

Interoperability is crucial for creating competitive alternatives to dominant platforms


Disagreed with

– Hannah Taieb

Disagreed on

Role of public vs. private solutions in addressing platform dominance


Open source solutions and public procurement requirements can promote alternatives to big tech dominance

Explanation

Governments can promote alternatives to big tech dominance by defaulting to open source solutions in government offices and extending these requirements to public procurement. This approach can help reduce dependency on proprietary platforms in critical infrastructure.


Evidence

Government offices should use open source by default and transmit this to public procurement requirements


Major discussion point

Alternative Platforms and Market Solutions


Topics

Economic | Legal and regulatory | Infrastructure


Agreed with

– Hannah Taieb

Agreed on

Technical solutions exist for ethical platform alternatives but require regulatory support


Cross-fertilization between DMA and DSA regulations through shared procedures and monitoring techniques

Explanation

The DMA and DSA regulations complement each other through shared procedural approaches and cross-learning between economic and fundamental rights enforcement. Competition law experience helps inform DSA procedures, while DSA algorithmic monitoring techniques can enhance DMA enforcement.


Evidence

DSA procedures are similar to competition cases, with DG COMP colleagues helping DG CONNECT colleagues in investigations. Self-preferencing monitoring can be enhanced by DSA techniques for monitoring user harm


Major discussion point

Regulatory Coordination and Enforcement


Topics

Legal and regulatory | Human rights


Agreed with

– Camila Leite Contri

Agreed on

Cross-regulatory coordination between different legal frameworks is necessary


C

Camila Leite Contri

Speech speed

143 words per minute

Speech length

1800 words

Speech time

752 seconds

Brazil is discussing similar digital market regulations adapted to local context

Explanation

Brazil is currently discussing improvements to competition authority powers to deal with digital markets, though not exactly replicating the DMA. The discussion focuses on adapting successful DMA elements like data sharing limitations and consent-or-pay prohibitions to the Brazilian context.


Evidence

Brazil is developing ways to improve competition authority attribution for digital markets, potentially importing DMA concepts like limitations on data sharing and prohibition on pay-or-consent models


Major discussion point

Digital Markets Act (DMA) and Competition Regulation


Topics

Economic | Legal and regulatory


Digital platforms act as gatekeepers of human rights, with economic concentration directly impacting fundamental rights

Explanation

Major digital communication platforms function as gatekeepers of human rights because economic concentration in digital markets directly affects fundamental rights access. The concentration of economic power translates into control over how people exercise their rights in tech-mediated society.


Evidence

Society is tech-mediated, citizenship is tech-mediated, and monopolies/concentration of economic power are foundational to most digital rights issues


Major discussion point

Connection Between Economic Power and Human Rights


Topics

Human rights | Economic


Agreed with

– Bruno Carballa Smichowski
– Raquel da Cruz Lima

Agreed on

Economic concentration directly impacts human rights and requires integrated regulatory approaches


Disagreed with

– Bruno Carballa Smichowski
– Raquel da Cruz Lima

Disagreed on

Primary regulatory approach – economic vs. human rights focus


Zero rating practices in Brazil limit platform choice for lower-income users, concentrating discourse possibilities

Explanation

Zero rating practices in Brazil create artificial incentives for lower-income users to use only certain platforms, effectively concentrating discourse and limiting freedom of expression. Users with data caps naturally gravitate toward platforms that don’t consume their limited data allowance.


Evidence

IDEC research shows people with prepaid mobile data (4GB monthly caps) primarily use WhatsApp, Facebook, and TikTok because these don’t spend their mobile data, creating disincentives to use alternative platforms


Major discussion point

Connection Between Economic Power and Human Rights


Topics

Human rights | Economic | Development


Google’s political interference during Brazil’s fake news bill debate demonstrates how economic power translates to political influence

Explanation

Google’s intervention during Brazil’s Digital Services Act debate exemplifies how dominant platforms use their gatekeeper position to influence political discourse. By placing anti-bill messaging on their main search page and promoting sponsored links, Google shaped public debate on legislation that would regulate their own conduct.


Evidence

During the fake news bill vote, Google placed ‘How can the fake news bill worsen your internet?’ on their main page, directed users to anti-bill blog posts, and promoted ‘no to censorship bill’ sponsored links in search results


Major discussion point

Connection Between Economic Power and Human Rights


Topics

Human rights | Economic | Legal and regulatory


Human rights considerations should be integrated into competition law analysis and enforcement

Explanation

Competition authorities should adopt bolder theories of harm that incorporate human rights impacts, similar to how data protection violations can inform competition cases. This requires breaking down silos between different regulatory fields and recognizing their interconnected nature.


Evidence

EU Facebook case where German Competition Authority could consider data protection violations as competition breaches, with parameters for cooperation between different regulatory authorities


Major discussion point

Connection Between Economic Power and Human Rights


Topics

Human rights | Legal and regulatory | Economic


Agreed with

– Bruno Carballa Smichowski

Agreed on

Cross-regulatory coordination between different legal frameworks is necessary


Competition authorities need bolder theories of harm and should consider breaking up dominant companies

Explanation

Given the unmeasurable impact of big tech platforms on people’s lives, competition authorities should develop more aggressive enforcement approaches, including company breakups. Current theories of harm are insufficient to address the scale of platform dominance and its societal effects.


Evidence

Platforms have an immeasurable impact on lives, and breaking up companies could be a solution; Article 19's unbundling proposal for hosting and curation services is one example


Major discussion point

Regulatory Coordination and Enforcement


Topics

Legal and regulatory | Economic


Disagreed with

– Bruno Carballa Smichowski

Disagreed on

Scope of regulatory intervention – targeted vs. comprehensive approach


Data protection violations can inform competition law enforcement as demonstrated in EU Facebook case

Explanation

The European Court of Justice established parameters for how competition authorities can consider data protection breaches within their competition analysis. This creates a framework for cross-regulatory enforcement that could extend to other areas like human rights.


Evidence

German Competition Authority case against Facebook went to ECJ, which ruled that competition authorities can interpret data protection violations as competition breaches, with specific cooperation procedures between different regulatory authorities


Major discussion point

Regulatory Coordination and Enforcement


Topics

Legal and regulatory | Human rights | Economic


Agreed with

– Bruno Carballa Smichowski

Agreed on

Cross-regulatory coordination between different legal frameworks is necessary


H

Hannah Taieb

Speech speed

136 words per minute

Speech length

1676 words

Speech time

736 seconds

Private companies can develop ethical, controllable recommendation algorithms while maintaining good user experience

Explanation

Companies like Speedio demonstrate that it’s possible to create recommendation algorithms that are ethical, controllable, and accessible while still providing good user experience. This challenges the narrative that effective personalization requires extensive data collection or unethical practices.


Evidence

Speedio specializes in commercialization of ethical recommendation algorithms, working with players like Claro Brazil, Sky, DirecTV, Canal+, Globo, TV5


Major discussion point

Alternative Platforms and Market Solutions


Topics

Economic | Human rights | Sociocultural


Agreed with

– Bruno Carballa Smichowski

Agreed on

Technical solutions exist for ethical platform alternatives but require regulatory support


Disagreed with

– Bruno Carballa Smichowski

Disagreed on

Role of public vs. private solutions in addressing platform dominance


Algorithm opacity and lack of contextual framing contributes to misinformation and undermines source visibility

Explanation

The shift from traditional media with clear editorial context to algorithm-driven feeds without transparent logic creates an environment where users cannot properly evaluate information sources. This lack of contextual understanding weakens users’ ability to detect bias and contributes to misinformation spread.


Evidence

Earlier generations encountered information with well-defined context (New York Times, BBC with known editorial lines), while today’s users get information through feeds where logic is invisible and intrusive, making media more ambient and anonymous


Major discussion point

Media, Information, and Algorithmic Transparency


Topics

Sociocultural | Human rights | Legal and regulatory


Traditional media faces economic pressures while creator economy and influencers gain dominance over trained journalists

Explanation

Traditional media organizations struggle with profitability as advertising budgets shift toward individual creators and influencers. This transformation raises concerns about access to reliable information, as creators with little journalistic background often receive more attention than trained professionals who verify and contextualize information.


Evidence

Traditional media increasingly financed by wealthy owners or public funding; advertisers’ budgets going toward influencers; creator economy becoming dominant force; branded content blurring lines between advertising and journalism


Major discussion point

Media, Information, and Algorithmic Transparency


Topics

Economic | Sociocultural | Human rights


Technical solutions exist for personalization without sensitive data collection, but stronger regulation needed

Explanation

From a technical standpoint, effective personalization and good user experience can be achieved while respecting GDPR and without using sensitive demographic data. The current reliance on extensive data collection is driven by advertising needs rather than user experience requirements.


Evidence

Personalization for user experience (not advertising) can be done while respecting GDPR; big tech integration of sensitive data for good user experience is not technically necessary – it’s for advertising purposes


Major discussion point

Alternative Platforms and Market Solutions


Topics

Human rights | Legal and regulatory | Economic


Agreed with

– Bruno Carballa Smichowski

Agreed on

Technical solutions exist for ethical platform alternatives but require regulatory support


Transparency in algorithmic logic essential for building user trust and enabling informed choices

Explanation

Users need to understand the logic behind algorithmic recommendations to make informed decisions about their media consumption. This transparency is crucial for building trust and enabling users to choose platforms that align with their values and needs.


Major discussion point

Media, Information, and Algorithmic Transparency


Topics

Human rights | Sociocultural | Legal and regulatory


Interoperability and algorithm pluralism needed for better information distribution

Explanation

Forcing greater interoperability and promoting algorithm pluralism would improve information distribution and reduce platform monopolization. The technical solutions exist, but implementation requires regulatory intervention and willingness from big tech companies to comply.


Evidence

Technical solutions exist for interoperability and open APIs; it is not a matter of technicalities but of regulation and goodwill from big tech


Major discussion point

Media, Information, and Algorithmic Transparency


Topics

Legal and regulatory | Infrastructure | Human rights


Agreed with

– Bruno Carballa Smichowski

Agreed on

Interoperability is crucial for creating competitive alternatives to dominant platforms


J

Jacques Peglinger

Speech speed

124 words per minute

Speech length

118 words

Speech time

57 seconds

Local champions and national platforms need different regulatory approaches than international gatekeepers

Explanation

The DMA addresses European-wide big platforms but doesn’t adequately address local champions that may dominate specific national markets. This raises questions about how different jurisdictions should handle platforms that are dominant locally but don’t meet international gatekeeper thresholds.


Evidence

DMA targets European-wide platforms due to size thresholds, but fragmented local markets may have local champions that need different regulatory treatment


Major discussion point

Digital Markets Act (DMA) and Competition Regulation


Topics

Economic | Legal and regulatory


R

Raquel da Cruz Lima

Speech speed

156 words per minute

Speech length

1450 words

Speech time

555 seconds

States have constitutional duty to consider human rights in all regulatory decisions including competition matters

Explanation

All state authorities, including competition regulators, have a constitutional obligation to consider international human rights treaties in their decision-making processes. This duty of conventionality control means human rights should be integrated into competition law analysis and enforcement.


Evidence

International community has long discussed duty of control or conventionality by every member of the state; whatever their conduct, they have duty to consider international treaties ratified by states


Major discussion point

Connection Between Economic Power and Human Rights


Topics

Human rights | Legal and regulatory


Agreed with

– Bruno Carballa Smichowski
– Camila Leite Contri

Agreed on

Economic concentration directly impacts human rights and requires integrated regulatory approaches


Disagreed with

– Bruno Carballa Smichowski
– Camila Leite Contri

Disagreed on

Primary regulatory approach – economic vs. human rights focus


A

Audience

Speech speed

139 words per minute

Speech length

680 words

Speech time

291 seconds

Infrastructure development and funding for digital public alternatives essential for Global South countries

Explanation

Global South countries face challenges in developing protagonism in digital markets due to lack of infrastructure to create their own platforms and services. Even when alternatives exist, the dominance of platforms like Google makes it difficult to gain traction without adequate infrastructure support.


Evidence

Question from Laura about how Global South can increase protagonism when lacking infrastructure for own social media/platforms while facing monopolization from Google and Meta


Major discussion point

Alternative Platforms and Market Solutions


Topics

Development | Infrastructure | Economic


Users need incentives and alternatives to reduce dependence on big tech platforms despite network effects

Explanation

Even when regulations create alternatives to big tech services, users face practical challenges in switching due to network effects and convenience factors. Overcoming these obstacles requires addressing both institutional arrangements and practical incentives for users to adopt alternative platforms.


Evidence

Question from João about how to overcome obstacles regarding incentives to users, noting that although alternatives might be available, it’s sometimes easier to use big tech services


Major discussion point

Alternative Platforms and Market Solutions


Topics

Economic | Sociocultural | Human rights


Civil society organizations need empowerment to participate in market-oriented regulatory discussions

Explanation

There’s a need to empower human rights organizations and civil society groups to engage meaningfully in market-oriented aspects of platform regulation. This requires bridging the gap between economic regulation and human rights advocacy to create more holistic platform governance approaches.


Evidence

Question from Beatriz about empowering organizations to join economic-oriented regulation conversations and connecting platform governance perspectives beyond just market/economic focus


Major discussion point

Regulatory Coordination and Enforcement


Topics

Human rights | Legal and regulatory | Economic


Dialogue between different regulatory disciplines and jurisdictions essential for effective platform governance

Explanation

Effective platform regulation requires coordination between different regulatory approaches (data protection, competition, human rights) and learning between jurisdictions. This interdisciplinary dialogue is crucial for addressing the complex challenges posed by platform dominance.


Evidence

Discussion about connecting data protection considerations with competition law, and learning between different jurisdictional approaches to platform regulation


Major discussion point

Regulatory Coordination and Enforcement


Topics

Legal and regulatory | Human rights | Economic


Agreements

Agreement points

Economic concentration directly impacts human rights and requires integrated regulatory approaches

Speakers

– Bruno Carballa Smichowski
– Camila Leite Contri
– Raquel da Cruz Lima

Arguments

DMA targets gatekeepers with economic objectives to reduce market power and prevent abuse


Digital platforms act as gatekeepers of human rights, with economic concentration directly impacting fundamental rights


States have constitutional duty to consider human rights in all regulatory decisions including competition matters


Summary

All speakers agree that economic power concentration in digital markets has direct implications for human rights, and that regulatory approaches should acknowledge this connection even when primarily focused on economic objectives


Topics

Human rights | Economic | Legal and regulatory


Technical solutions exist for ethical platform alternatives but require regulatory support

Speakers

– Bruno Carballa Smichowski
– Hannah Taieb

Arguments

Open source solutions and public procurement requirements can promote alternatives to big tech dominance


Private companies can develop ethical, controllable recommendation algorithms while maintaining good user experience


Technical solutions exist for personalization without sensitive data collection, but stronger regulation needed


Summary

Both speakers acknowledge that technical solutions for more ethical platform alternatives already exist, but successful implementation requires supportive regulatory frameworks and policy interventions


Topics

Economic | Legal and regulatory | Infrastructure


Interoperability is crucial for creating competitive alternatives to dominant platforms

Speakers

– Bruno Carballa Smichowski
– Hannah Taieb

Arguments

Brazil’s PIX payment system demonstrates successful public alternative through mandatory interoperability requirements


DMA creates new obligations for platforms including allowing business outside platforms, data access, app uninstallation, and interoperability


Interoperability and algorithm pluralism needed for better information distribution


Summary

Both speakers emphasize that mandatory interoperability requirements are essential for breaking platform monopolies and creating viable alternatives, as demonstrated by successful cases like Brazil’s PIX system


Topics

Economic | Infrastructure | Legal and regulatory


Cross-regulatory coordination between different legal frameworks is necessary

Speakers

– Bruno Carballa Smichowski
– Camila Leite Contri

Arguments

Cross-fertilization between DMA and DSA regulations through shared procedures and monitoring techniques


Human rights considerations should be integrated into competition law analysis and enforcement


Data protection violations can inform competition law enforcement as demonstrated in EU Facebook case


Summary

Both speakers advocate for breaking down regulatory silos and creating coordination mechanisms between different legal frameworks (competition, data protection, human rights) to address platform dominance comprehensively


Topics

Legal and regulatory | Human rights | Economic


Similar viewpoints

Current regulatory approaches are insufficient and need to be more aggressive, while also ensuring broader participation from civil society in shaping these approaches

Speakers

– Camila Leite Contri
– Audience

Arguments

Competition authorities need bolder theories of harm and should consider breaking up dominant companies


Civil society organizations need empowerment to participate in market-oriented regulatory discussions


Topics

Legal and regulatory | Economic | Human rights


Platform opacity and algorithmic control enable manipulation of information and political discourse, demonstrating how technical design choices have political consequences

Speakers

– Hannah Taieb
– Camila Leite Contri

Arguments

Algorithm opacity and lack of contextual framing contribute to misinformation and undermine source visibility


Google’s political interference during Brazil’s fake news bill debate demonstrates how economic power translates to political influence


Topics

Human rights | Sociocultural | Economic


Public digital infrastructure can effectively compete with private platforms when properly designed and regulated, and this approach is particularly important for Global South development

Speakers

– Bruno Carballa Smichowski
– Audience

Arguments

Brazil’s PIX payment system demonstrates successful public alternative through mandatory interoperability requirements


Infrastructure development and funding for digital public alternatives essential for Global South countries


Topics

Development | Infrastructure | Economic


Unexpected consensus

Need for bolder regulatory enforcement including potential company breakups

Speakers

– Bruno Carballa Smichowski
– Camila Leite Contri

Arguments

Four enforcement cases already opened against Apple and Meta with significant fines imposed


Competition authorities need bolder theories of harm and should consider breaking up dominant companies


Explanation

Unexpected because Bruno represents the European Commission (regulatory authority) while Camila represents civil society, yet both acknowledge that current enforcement may need to be more aggressive, including considering company breakups


Topics

Legal and regulatory | Economic


Technical feasibility of ethical alternatives without compromising user experience

Speakers

– Hannah Taieb
– Bruno Carballa Smichowski

Arguments

Technical solutions exist for personalization without sensitive data collection, but stronger regulation needed


Open source solutions and public procurement requirements can promote alternatives to big tech dominance


Explanation

Unexpected consensus between a business representative and a regulatory researcher that ethical alternatives are technically viable and don’t require sacrificing user experience, challenging industry narratives about necessary trade-offs


Topics

Economic | Legal and regulatory | Human rights


Overall assessment

Summary

Strong consensus emerged around the interconnection between economic concentration and human rights impacts, the technical feasibility of ethical alternatives, the importance of interoperability, and the need for cross-regulatory coordination


Consensus level

High level of consensus with significant implications for policy development. The agreement between regulatory, business, and civil society perspectives suggests a mature understanding of platform governance challenges and potential solutions. This consensus could facilitate more integrated policy approaches that address both economic and human rights concerns simultaneously.


Differences

Different viewpoints

Primary regulatory approach – economic vs. human rights focus

Speakers

– Bruno Carballa Smichowski
– Camila Leite Contri
– Raquel da Cruz Lima

Arguments

DMA targets gatekeepers with economic objectives to reduce market power and prevent abuse


Digital platforms act as gatekeepers of human rights, with economic concentration directly impacting fundamental rights


States have constitutional duty to consider human rights in all regulatory decisions including competition matters


Summary

Bruno emphasizes DMA’s purely economic objectives with indirect human rights effects, while Camila and Raquel argue for direct integration of human rights considerations into competition law and regulatory frameworks.


Topics

Legal and regulatory | Human rights | Economic


Scope of regulatory intervention – targeted vs. comprehensive approach

Speakers

– Bruno Carballa Smichowski
– Camila Leite Contri

Arguments

DMA creates new obligations for platforms including allowing business outside platforms, data access, app uninstallation, and interoperability


Competition authorities need bolder theories of harm and should consider breaking up dominant companies


Summary

Bruno supports targeted regulatory obligations for gatekeepers, while Camila advocates for more aggressive intervention including company breakups as necessary solutions.


Topics

Legal and regulatory | Economic


Role of public vs. private solutions in addressing platform dominance

Speakers

– Bruno Carballa Smichowski
– Hannah Taieb

Arguments

Brazil’s PIX payment system demonstrates successful public alternative through mandatory interoperability requirements


Private companies can develop ethical, controllable recommendation algorithms while maintaining good user experience


Summary

Bruno emphasizes public infrastructure solutions like PIX as effective alternatives, while Hannah focuses on private sector innovation and ethical algorithm development as viable market solutions.


Topics

Economic | Infrastructure | Alternative Platforms and Market Solutions


Unexpected differences

Effectiveness of current regulatory timeline and enforcement speed

Speakers

– Bruno Carballa Smichowski
– Camila Leite Contri

Arguments

Four enforcement cases already opened against Apple and Meta with significant fines imposed


Competition authorities need bolder theories of harm and should consider breaking up dominant companies


Explanation

Unexpectedly, Bruno presents DMA enforcement as relatively successful with four cases and significant fines in just two years, while Camila argues this approach is insufficient and calls for much more aggressive action including breakups. This suggests a fundamental disagreement about whether current regulatory pace is adequate.


Topics

Legal and regulatory | Economic


Overall assessment

Summary

The main areas of disagreement center on regulatory philosophy (economic vs. human rights focus), intervention intensity (targeted obligations vs. company breakups), and solution approaches (public infrastructure vs. private innovation). Despite shared concerns about platform dominance, speakers differ significantly on implementation strategies.


Disagreement level

Moderate to high disagreement on methods and approaches, but strong consensus on the fundamental problem of platform dominance. The disagreements reflect different professional backgrounds and jurisdictional perspectives, which could complicate coordinated global responses but also provide diverse policy options for different contexts.




Takeaways

Key takeaways

Economic concentration in digital markets directly impacts human rights, particularly freedom of expression and access to information


The EU’s Digital Markets Act (DMA) provides a regulatory model that other jurisdictions like Brazil can adapt, focusing on preventing gatekeeper platforms from abusing their market power


Technical solutions exist for ethical algorithms and personalization without extensive data collection, but stronger regulation and incentives are needed to implement them


Public alternatives like Brazil’s PIX payment system can successfully challenge big tech dominance when combined with mandatory interoperability requirements


Cross-disciplinary dialogue between competition law, human rights, and technology experts is essential for effective platform governance


Traditional media faces economic pressures while algorithm-driven platforms concentrate information distribution, requiring new business models that prioritize transparency and user control


Resolutions and action items

Civil society organizations should learn market language and engage more actively in competition law discussions


Competition authorities should adopt bolder theories of harm and consider breaking up dominant companies


Governments should promote open source alternatives through public procurement requirements


Brazil should focus on developing digital public infrastructure and cloud alternatives to reduce dependency on big tech


Regulators should integrate human rights considerations into competition law analysis and enforcement


Stronger regulation needed to require algorithmic transparency and interoperability


Article 19’s policy paper ‘Taming Big Tech’ should be consulted for unbundling solutions for social media platforms


Unresolved issues

How to overcome network effects that keep users on dominant platforms despite availability of alternatives


How Global South countries can develop technological infrastructure to compete with established gatekeepers


What specific incentive structures would effectively encourage users to adopt alternative platforms


How to balance the need for platform regulation with concerns about fragmenting the digital public sphere


What institutional design works best – separate regulations for economic and human rights issues versus integrated approaches


How to address the revolving door problem between government and big tech companies


What funding mechanisms can support ethical tech vendors and open source solutions at scale


Suggested compromises

Jurisdictional learning approach where countries can be ‘second movers’ and learn from EU’s DMA implementation mistakes and successes


Layered approach to alternatives – public infrastructure for some services, open source for government use, and regulation for private markets


Cross-fertilization between different regulatory frameworks (DMA and DSA) sharing procedures and monitoring techniques while maintaining distinct objectives


Cooperation between data protection and competition authorities to address overlapping concerns without conflicting decisions


Supporting both public alternatives and private ethical tech vendors rather than choosing one approach exclusively


Contextual advertising models that provide transparency about ad targeting while maintaining media funding mechanisms


Thought provoking comments

We currently have a society that is tech-mediated, our citizenship is tech-mediated… in Brazil we still have this zero rating practices, in which people that use prepaid mobile data… they mostly use the platform, the applications that don’t spend their mobile cap. So, we currently have people that have, for example, per month, four gigabytes, and WhatsApp, Facebook, and TikTok don’t spend internet, so why would someone have an incentive to use another platform, and how this is important to how the debate is developed on how people express themselves.

Speaker

Camila Leite Contri


Reason

This comment brilliantly connects economic inequality to digital rights violations, showing how market structures create barriers to free expression for lower-income populations. It demonstrates how seemingly neutral business practices (zero rating) actually entrench platform monopolies and limit democratic discourse.


Impact

This shifted the discussion from abstract regulatory concepts to concrete examples of how economic concentration affects human rights in practice. It grounded the theoretical framework in real-world inequality and influenced subsequent discussions about alternatives and infrastructure needs in the Global South.


During the week of the votation of the Brazilian DSA… Google put in the main website… ‘how can the bill, the fake news bill can worsen your internet?’… And when would you search for a fake news bill? The first link that would appear would be a sponsored link by Google saying no to censorship bill. So how can we have a free space of debating when the… basically the only search platform that people use in practice put this and change the whole debate.

Speaker

Camila Leite Contri


Reason

This example powerfully illustrates how economic dominance translates into political power, showing concrete evidence of how platforms can manipulate democratic processes. It demonstrates the concept of ‘political self-preferencing’ – extending the DMA’s economic self-preferencing rules into the political sphere.


Impact

This comment introduced a new dimension to the discussion by showing how competition law violations can directly undermine democratic processes. It elevated the conversation from market efficiency concerns to fundamental questions about democracy and political manipulation, influencing how other panelists framed the urgency of regulation.


I always felt kind of isolated in both fields, both in competition law, where you’re talking about human rights in the digital sphere, and both in civil society, in the human rights side, talking within the language of market… monopoly competition issues are key to human rights and we should analyze them together.

Speaker

Camila Leite Contri


Reason

This meta-commentary on the artificial separation between competition law and human rights advocacy identified a crucial structural problem in how these issues are typically addressed. It challenged the siloed approach that weakens both fields.


Impact

This comment set the tone for the entire discussion by explicitly calling for interdisciplinary dialogue. It validated the workshop’s premise and encouraged other participants to think beyond their traditional disciplinary boundaries, leading to more integrated analysis throughout the session.


The way personalization is done… largely by collecting private data with no or little regard for actual transparency or contextual understanding… as earlier generations were encountering information with very well-defined context in environments such as newspapers… today, many young users, the only information they encounter comes through feeds, where logic is invisible and intrusive.

Speaker

Hannah Taieb


Reason

This insight connected the loss of editorial context to the rise of algorithmic curation, showing how the shift from traditional media to platform-mediated information fundamentally changes how citizens engage with information and democracy.


Impact

This comment deepened the discussion by introducing the concept of ‘contextual framing’ as a democratic necessity. It influenced the conversation toward solutions focused on transparency and alternative business models, and connected technical algorithmic issues to broader questions about informed citizenship.


PIX… can counterbalance market power in a very strong way. But of course, these have to come with a proper regulation that makes them a real alternative… we forced all the companies, digital and non-digital, to be interoperable with PIX.

Speaker

Bruno Carballa Smichowski


Reason

This example provided a concrete model for how public digital infrastructure can successfully challenge private platform dominance, showing that alternatives are possible when combined with smart regulation requiring interoperability.


Impact

This comment shifted the discussion from purely regulatory approaches to hybrid public-private solutions. It gave concrete hope to participants from the Global South who were asking about alternatives to Big Tech dominance, and influenced the conversation toward practical policy solutions rather than just theoretical frameworks.


Why can’t we understand the human rights impact and bring this into competition authorities? But this demands bold public servants… I hope other authorities, such as the Brazilian ones, have this same openness.

Speaker

Camila Leite Contri


Reason

This direct challenge to regulatory authorities to expand their mandate and consider human rights impacts in competition cases was both a call to action and a recognition that institutional change requires individual courage within bureaucratic systems.


Impact

This comment personalized the regulatory challenge and created a direct dialogue between civil society and regulatory officials (Bruno). It moved the discussion from abstract policy to the human agency required for institutional change, and set up a framework for ongoing collaboration between different stakeholder groups.


Overall assessment

These key comments fundamentally shaped the discussion by breaking down artificial barriers between economic and human rights analysis. Camila’s interventions were particularly transformative, providing concrete examples that grounded theoretical concepts in lived experience and democratic practice. Her comments about zero-rating and Google’s political interference demonstrated how market concentration directly undermines human rights, while her call for interdisciplinary dialogue set the collaborative tone for the entire session. Hannah’s insights about algorithmic curation and contextual framing added crucial technical depth, showing how business model changes could support democratic values. Bruno’s PIX example provided hope and practical direction for Global South participants. Together, these comments created a conversation that was both analytically rigorous and practically oriented, successfully bridging the gap between competition law, human rights advocacy, and technical innovation. The discussion evolved from separate disciplinary perspectives to an integrated framework for understanding digital platform power as fundamentally both an economic and democratic challenge.


Follow-up questions

Should generative AI be included under a new category in the DMA or does it fit into existing categories like search engines?

Speaker

Bruno Carballa Smichowski


Explanation

This is an ongoing discussion about how to regulate emerging AI technologies within the existing DMA framework, which is crucial for determining regulatory scope and enforcement.


How can the Global South increase protagonism in competition scenarios when lacking infrastructure to create alternatives to dominant platforms?

Speaker

Laura (audience member from Brazil)


Explanation

This addresses the fundamental challenge of developing competitive alternatives in regions with limited technological infrastructure and resources.


How can incentives to use big tech services be diminished when alternatives are available but big tech platforms remain easier to use?

Speaker

João (audience member from Brazil)


Explanation

This explores the practical challenge of user adoption of alternatives despite regulatory changes that create more options.


How is Brazil handling local champions, and are there nationwide platforms that could be considered local champions?

Speaker

Jacques Peglinger


Explanation

This examines how competition policy addresses domestic market leaders versus international platforms, which is important for understanding comprehensive market regulation.


How can human rights organizations be empowered to join economic-oriented regulatory conversations about platform governance?

Speaker

Beatriz (University of Sussex)


Explanation

This addresses the need for interdisciplinary collaboration between human rights advocates and competition/market regulators for more holistic platform governance.


What are the priorities in terms of regulations to increase competition and create markets more respectful of human rights?

Speaker

Raquel da Cruz Lima


Explanation

This seeks to identify the most important regulatory interventions needed to achieve both competitive markets and human rights protection.


Are there changes in breaking up the advertising market concentration and making it more aligned with human rights?

Speaker

Raquel da Cruz Lima


Explanation

This examines whether regulatory efforts are successfully addressing the concentrated advertising market dominated by major platforms.


How can competition authorities develop bolder theories of harm to address the unmeasurable impact of big tech on society?

Speaker

Camila Leite Contri


Explanation

This explores how competition law enforcement could be strengthened to better address the broader societal impacts of platform dominance beyond traditional economic harms.


How can Brazil focus on data sovereignty and address concerns about revolving doors between government and big tech cloud companies?

Speaker

Camila Leite Contri


Explanation

This addresses the need to examine conflicts of interest and dependency issues in critical digital infrastructure decisions.


How can digital public infrastructures be funded and alternatives created from both small companies and the public sector?

Speaker

Camila Leite Contri


Explanation

This explores practical mechanisms for developing competitive alternatives through public investment and support for smaller market players.


Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

Host Country Open Stage

Session at a glance

Summary

This discussion presents a historical overview of Norway’s pioneering role in Internet development and its vision for the future, delivered as an opening session at the Internet Governance Forum (IGF). Host Adelie Dorseuil guides five Norwegian Internet experts through the country’s journey from establishing one of the world’s first Internet connections via ARPANET in 1973 to becoming a global leader in digital innovation. Harald Alvestrand explains how Norway’s early adoption of Internet technology was driven by the cultural desire to connect people and solve problems collaboratively, particularly through the university network UNINET in 1993. Kristin Braa describes how Internet technology revolutionized healthcare systems in the Global South, starting with post-apartheid South Africa and eventually scaling to 80 countries through mobile Internet and open-source platforms. Josef Noll discusses the Basic Internet Foundation’s “Walk on the Internet” initiative, which aims to provide affordable Internet access to underserved communities, particularly in Africa where 75% of people still lack mobile broadband access. Kjetil Kjernsmo presents a more critical perspective, arguing that while Internet technology has enabled remarkable innovation, it has also created threats to democracy and requires new institutions with democratic mandates rather than purely commercial ones. Linda Firveld emphasizes that Internet access has become essential infrastructure in Norway, with near 100% coverage making it the fourth pillar of household utilities alongside electricity and water. The discussion concludes with each expert offering a single word representing the future focus: “connected,” “unite,” “make it happen,” “digital commons,” and “democracy.” This session serves as both a celebration of Norway’s Internet heritage and a call to action for creating more inclusive, democratic digital futures globally.


Keypoints

**Major Discussion Points:**


– **Norway’s pioneering role in Internet history** – The discussion traces Norway’s early adoption and contributions to Internet technology, from the first ARPANET connection in 1973 to becoming the first country to route all its traffic through the Internet in 2022


– **Global scaling and accessibility of Internet technology** – Speakers discussed how Internet technology has revolutionized connectivity in the Global South, particularly through mobile Internet adoption in Africa and the development of health information systems across 80+ countries


– **Inclusive Internet access models** – The conversation addressed the challenge that 75% of people in sub-Saharan Africa still don’t use mobile broadband due to cost, and explored innovative models like “The Walk on the Internet” initiative to make connectivity more accessible


– **Democracy and institutional challenges in the digital age** – Discussion of how current technology has created “cracks in the fabric of democracy” and the need for new institutions with democratic mandates rather than purely commercial ones to govern social media and digital infrastructure


– **Future vision for Internet governance and digital commons** – Speakers shared their vision for the future, emphasizing the need for digital commons, democratic institutions, and continued global connectivity as foundational elements


**Overall Purpose:**


The discussion served as an opening session for the Internet Governance Forum (IGF) 2025, using Norway’s Internet history as a framework to introduce key themes and challenges in global Internet governance. The speakers aimed to set the stage for conference discussions by highlighting both technological achievements and ongoing challenges in making the Internet accessible and democratic worldwide.


**Overall Tone:**


The tone was consistently optimistic and forward-looking throughout the conversation. It began with pride and celebration of Norwegian technological achievements, maintained an enthusiastic and collaborative spirit when discussing global initiatives, and concluded with hopeful calls to action for the future. The speakers demonstrated expertise while remaining accessible, and the moderator maintained an engaging, informative presentation style that effectively transitioned the audience from this historical overview into the main conference proceedings.


Speakers

– **Adelie Dorseuil**: Moderator/Host of the Open Stage session at IGF


– **Harald Alvestrand**: Internet expert with 40 years of experience, former chair of the IETF, played important role in UNINET (university networks)


– **Kristin Braa**: Expert involved in health information systems development, worked on post-apartheid health sector reconstruction project starting in 1994, involved in scaling health platforms globally


– **Josef Noll**: Associated with Basic Internet Foundation, works on inclusive internet connectivity models, involved in “The Walk on the Internet” initiative


– **Kjetil Kjernsmo**: Known as “Dr. Internet Enthusiast,” editor of the Solid Specification (Tim Berners-Lee’s project), expert in social and web technologies


– **Linda Firveld**: Leader of a tech company working with broadband providers, expert in Wi-Fi and broadband access


Additional speakers:


None identified beyond the provided list of speakers.


Full session report

# Norway’s Internet Journey: From Pioneer to Global Leader – A Discussion Summary


## Introduction and Context


This opening session at the Internet Governance Forum featured five Norwegian Internet experts discussing the nation’s journey from early Internet adoption to digital leadership. Moderated by Adelie Dorseuil, the panel included Harald Alvestrand (Internet veteran and former IETF chair), Kristin Braa (health information systems specialist), Josef Noll (Basic Internet Foundation), Kjetil Kjernsmo (“Dr. Internet Enthusiast” and Solid Specification editor), and Linda Firveld (broadband infrastructure specialist).


## Norway’s Internet History


### Early Adoption Timeline


Adelie Dorseuil outlined Norway’s pioneering Internet milestones: the first ARPANET connection in 1973, Norwegian TDMA technology becoming the foundation for GSM in 1986, development of the Opera browser in the early 1990s (still the leading browser in Africa), and presenting the Winter Olympics online in 1993-1994. Norway became the first country to transition all traffic through the Internet by 2022, supported by near 100% Internet coverage.


### Cultural Foundation for Success


Harald Alvestrand explained that Norwegian universities drove early adoption through the UNINET network, reflecting cultural values that aligned with Internet principles. “The Internet culture aligns with Norwegian values of interconnection and people-level problem solving,” he observed, suggesting Norway’s success was fundamentally cultural rather than merely technological.


## Global Impact and Development


### Health Systems Revolution


Kristin Braa demonstrated the Internet’s transformative impact in developing regions through health information systems. Beginning with post-apartheid South Africa’s health sector reconstruction in 1994, her work with the DHIS2 platform scaled to over 80 countries. “The Internet revolutionised the Global South through health information systems,” she explained, noting that after 2010, mobile Internet enabled countries to “leapfrog the fixed net, totally no fixed net, no fixed phone, only mobile Internet.”


### Persistent Access Barriers


Josef Noll highlighted ongoing challenges, pointing out that “75% of people in sub-Saharan Africa still don’t use mobile broadband due to cost.” He questioned whether current models are truly inclusive and proposed a “road model” where “digital infrastructure should follow a road model where basic connectivity enables free access for digital pedestrians and cyclists.”


## Infrastructure and Democratic Governance


### Internet as Essential Infrastructure


Linda Firveld described “Internet and home Wi-Fi as the fourth pillar of infrastructure alongside electricity and water,” noting that people now expect connectivity to “just work.” She observed the dramatic increase in connected devices in Norwegian homes and anticipated “a service economy as the fifth infrastructure pillar.”


### Technology and Democratic Challenges


Kjetil Kjernsmo introduced critical concerns about technology’s impact on democracy, arguing that “current technology has created cracks in the fabric of democracy due to lack of proper institutional development.” He referenced Norway’s 2004 constitutional change requiring authorities to “create conditions that facilitate open and enlightened public discourse,” but identified a “sin of omission” that “the state did not immediately realise that this means we need to build new institutions.”


Kjernsmo argued that social media platforms “function as infrastructure for public discourse, relationships, and commerce requiring democratic oversight” and called for “institutions that develop technology with a democratic mandate rather than just a commercial one.”


## Key Themes and Perspectives


The discussion revealed both consensus and tension among speakers. All agreed on the importance of universal access and technology serving democratic purposes, but differed in their assessments of current progress. While Braa emphasized successful scaling in health systems, Noll highlighted persistent cost barriers. Alvestrand’s optimistic view of Internet culture contrasted with Kjernsmo’s concerns about democratic institutions.


Harald Alvestrand emphasized the need for “investment in growing the next generation of Internet leaders beyond the first generation,” while Kjernsmo focused on creating “open global digital common space ecosystems” as alternatives to commercial platforms.


## Future Vision


The session concluded with each speaker offering a single word or phrase for the future, from “stay connected” and “time to unite” to “digital commons” and “democracy.” The discussion successfully used Norway’s experience to introduce key themes in global Internet governance: the tension between technological achievement and persistent inequalities, the challenge of democratic governance in digital spaces, and the need for new institutional approaches.


The conversation demonstrated that while significant progress has been made in Internet development, fundamental challenges remain in ensuring inclusive access and democratic governance of digital infrastructure and platforms.


## Session transcript

Adelie Dorseuil: Good morning, and welcome to Open Stage here at IGF, and for many of you, welcome to Norway. As a sort of teaser for the conference, I would like to take you on a little travel back in time. Please come along for a quick history of the Internet from a Norwegian perspective. I have invited five experts who were part of Norwegian Internet history to share some insight. Because one thing you need to know about Norwegians is that we tend to be early adopters of things, including electric cars, AI, sushi, and also the Internet. The very first Internet connection in Norway was already in 1973, through the then ARPANET, the Internet’s first experiment. This was ten years before what many consider to be the birthday of the Internet in 1983. It happened in a building affectionately called The Basement, Kjeller in Norwegian. I highly recommend that you join the tour of Kjeller on Tuesday and Thursday at 5 p.m. Another thing about Norway, besides being early adopters, is that we like remote places. Very remote places. Like Svalbard. Anybody heard of it? 3,000 people, 3,000 polar bears. And in 1974, what they really needed, and got, was a very early satellite connection. We weren’t just early adopters; we were also active contributors. In 1986, the Norwegian model of narrowband TDMA performed better than the other available technologies, and it became the basis for the GSM system, which was established as a European effort. In 1993, one of the leading arenas for early use of Internet standards was the universities, with our very own network, UNINET. Harald, you played an important role in UNINET. Can you tell us what drove the universities to adopt the use of the Internet? What was the motivation?


Harald Alvestrand: So, it’s been a while. I found out I’ve been working with the Internet for 40 years now, since before the mobile phone system existed. That’s kind of strange. Part of that time was spent at UNINET, connecting the university networks, and part of it was spent as chair of the IETF. It’s been a continuous journey in which I’ve encountered a lot of people, and what they have in common is the desire to connect, the desire to enable people to communicate with each other, so that the Internet should be for everyone. This fits very well with Norwegian culture, because Norwegian culture is all about interconnecting, about solving problems at the people level, getting the people who can really solve the problem to talk to each other. And thus the Internet became so important for the community. But engaging takes time. I’ve been privileged to work on this Internet thing for 40 years, for various employers. And in the future, we also have to remember to invest in growing the people who will take over from the first generation on the Internet.


Adelie Dorseuil: Thank you. The year after, in 1994, we developed the first web browser for mobile phones, the Opera browser, which is still the leading browser in Africa. We also presented the Winter Olympics online and established a wide-reaching health information system platform. Kristin, you were a part of this journey. Can you tell us about ensuring a successful scale-up?


Kristin Braa: So this is not really about Norway, but about the Global South. The Internet has been a revolution for the Global South, especially when it comes to scaling. But it all started in 1994 as a post-apartheid project, an action research project reconstructing the health sector after apartheid. After having 14 Departments of Health, with all the data going up to the national level, it was of course a revolution to get access to your own data for health decisions at the district level. That is how DHIS2 started. Since then we have been traveling through geography and technology, starting with floppy disks, USB sticks and email attachments, on to the web and mobile Internet. And when the cable came through Africa, building up through Africa, the whole of Africa was able to utilize mobile Internet on the fly. That, of course, was extremely important for scaling: being able to leapfrog the fixed net, totally no fixed net, no fixed phone, only mobile Internet. So we were able to scale in Kenya, the first sub-Saharan African country with a totally national-scale health information system, reaching out to all the districts in the whole of Kenya. That was a revolution in 2010. Inspired by this, and coordinated from the University of Oslo, this digital open-source health platform grew, utilizing all the technology in order to be able to scale. That inspired Ghana, Tanzania, the rest of East Africa, and then also India, ending up with 80 countries using it as a national health information system. However, counting the NGOs, MSF, the Red Cross and others, it is 130 countries that are using DHIS2 as a system. Stop.


Adelie Dorseuil: And from then on, we kept on going. We even set a world record along the way in 1999. We also participated in the development of portable hardware in 2005. But yet another thing to know about Norway is that we like to share, especially something as powerful as the Internet. And in 2014, the Basic Internet Foundation was created. Josef, what can you tell us about their initiative called The Walk on the Internet?


Josef Noll: Thanks so much, Adelie. Despite what Kristin said, we still have 75% of people in Africa south of the Sahara who don’t use mobile broadband, because it’s too expensive. So we should ask ourselves: are the models we are using inclusive models? Are they there to get everyone included? The first thing we set out to solve was: can we go out where nobody believes you can connect? Yes, we can connect. The second step was: how can we ensure that everyone is with us? That’s where we adopted the model of the road, saying that someone needs to build a road, but once the road is built, digital pedestrians and digital cyclists can use the road for free. And for those of you wondering what these shoes are made of: they are from my friends from Kenya, from the Maasai. After we connected them, they gave me these old tires, and those tires are now the shoes for The Walk on the Internet. Thanks.


Adelie Dorseuil: Thank you. From then on, we kept going and started to look for places that were as remote as Svalbard, but with fewer polar bears. And like the Maasai village of Silila in Africa, this was all part of a broader futuristic vision that we hope to bring here today to IGF. Kjetil, I’ve been told your nickname is Dr. Internet Enthusiast. What is your enthusiastic vision for the future? What do you think we should discuss here at IGF?


Kjetil Kjernsmo: Right, so as someone who turned 20 the year the web took off, I fall into the category Douglas Adams described: anything that gets invented before you turn 30 is incredibly exciting and creative, and with any luck, you can make a career out of it. And I did. I got involved very early on in social and web technologies, and more recently I was the editor of the Solid Specification, Tim Berners-Lee’s main project. But what we have today is not what I grew up to love. Most seriously, the technology has opened cracks in the fabric of democracy. This was not inevitable. It happened because some powerful men didn’t have a clue about how to build successful societies, but it can be made really good. Here, enter a key Norwegian innovation. In 2004, the Norwegian constitution was changed to include that the authorities of the state shall create conditions that facilitate open and enlightened public discourse. Unfortunately, there was a sin of omission: the state did not immediately realize that this means we need to build new institutions. Institutions that develop technology with a democratic mandate rather than just a commercial one, and it has to happen in open global digital common space ecosystems. Because social media is infrastructure, not only for public discourse, but for most of our social activities: our relationships, our collaboration, our commerce. So my vision for the future is to bring this together. But the key innovation is that we need to build these new institutions, and that is what I would love to discuss.


Adelie Dorseuil: Thank you so much. Talking about the future and futuristic endeavors, we can’t forget Norway’s contribution to the search for water on Mars with the ground-penetrating Rimfax radar in 2021. And in 2022, we were ready to let go of the fixed telephony network, and Norway became the first country in the world to divert all traffic through the Internet, a move that we wish upon the rest of the world. Linda, you’re the leader of a tech company working with broadband providers. Why is Internet access through Wi-Fi and broadband important for everyone?


Linda Firveld: Well, thank you for the question. When you look at Norway in 2022, and actually today as well, we are close to 100% coverage, which is quite unique. We also see that Internet and home Wi-Fi are to be considered the fourth pillar of infrastructure, meaning a household utility: people expect it to just work, just like electricity and water. We also see more connected things than ever in Norwegian homes today, which is quite amazing. Just 10 years ago it was maybe two or three devices; it is evolving very rapidly. This means we are ready for the next wave, what I like to call the fifth pillar of infrastructure: a service economy. You were touching on that a little bit as well, and what you mentioned is very important, because if we do it right, it will empower governments, businesses and people and make us ready for whatever we need to do in the future.


Adelie Dorseuil: Thank you so much. And now fast forward to the present, 2025. The Internet is a world of possibilities which we want to offer to the rest of the world. We’re excited to see the launch of the Affordable Access for Education, Health and Empowerment Act here in Lillestrøm at IGF, and you get to be a part of it on Friday at 9 a.m. We don’t know what the future has in store for us, and I was wondering if I could ask my guests to come up with one word, one topic that you think is the next big thing, the thing to watch here at IGF. We can start with you, Harald. “So in a world that seems to crack everywhere: stay connected.” “I can continue and say: time to unite in these difficult times.” “And I’d follow up with: make it happen.” “My words would be: digital commons.” “Mine is democracy.” When it comes to the history of the Internet, in the great scheme of things, it’s only just begun. I would like to invite you to keep on making history and join me in the opening of IGF 2025. If you would please follow me to the plenary session so we can start the opening. Thank you so much for your attention.


Adelie Dorseuil

Speech speed: 144 words per minute
Speech length: 848 words
Speech time: 352 seconds

Norway was an early adopter with first ARPANET connection in 1973, ten years before Internet’s official birthday

Explanation

Norway established its first Internet connection through ARPANET in 1973, demonstrating the country’s early adoption of Internet technology. This connection occurred a full decade before 1983, which is commonly considered the official birthday of the Internet.


Evidence

The connection was established in a building called ‘The Basement’ (Kjeller in Norwegian), and there are tours available on Tuesday and Thursday at 5 p.m.


Major discussion point

Norway’s pioneering role in Internet adoption


Topics

Infrastructure | Development


Norwegian TDMA technology became the basis for the GSM system in 1986

Explanation

In 1986, Norway developed a narrow band TDMA (Time Division Multiple Access) model that outperformed other available technologies. This Norwegian innovation became the foundation for the GSM system, which was established as a European-wide effort.


Evidence

The Norwegian model of TDMA performed better than other available technology and was adopted as the basis for GSM


Major discussion point

Norwegian technological contributions to global communications


Topics

Infrastructure | Economic


Norway developed first mobile web browser (Opera) and presented Winter Olympics online in 1993-1994

Explanation

Norway created the first web browser designed for mobile phones, the Opera browser, which continues to be a leading browser in Africa today. Additionally, Norway pioneered online presentation of the Winter Olympics and established comprehensive health information system platforms during this period.


Evidence

Opera browser is still the leading browser in Africa, and Norway also established a wide-reaching health information system platform


Major discussion point

Early mobile Internet innovations


Topics

Infrastructure | Development | Economic


Harald Alvestrand

Speech speed: 109 words per minute
Speech length: 191 words
Speech time: 104 seconds

Universities drove early Internet adoption through UNINET network to enable communication and problem-solving

Explanation

Universities were motivated to adopt Internet standards through the UNINET network because of a fundamental desire to connect people and enable communication. This aligned with Norwegian culture’s emphasis on interconnecting and solving problems at the people level by getting those who can solve problems to talk to each other.


Evidence

Harald worked with the Internet for 40 years, served as chair of the IETF, and was involved in UNINET connecting university networks


Major discussion point

Cultural alignment between Internet values and Norwegian problem-solving approach


Topics

Sociocultural | Infrastructure


Investment needed in growing next generation of Internet leaders beyond the first generation

Explanation

Having worked in the Internet field for 40 years, there’s a recognition that the first generation of Internet pioneers needs to invest time and resources in developing the next generation of leaders. This succession planning is crucial for the continued development and governance of the Internet.


Evidence

Harald has been privileged to work for various employers on Internet development for 40 years


Major discussion point

Generational transition in Internet leadership


Topics

Development | Sociocultural


Internet culture aligns with Norwegian values of interconnection and people-level problem solving

Explanation

The Internet’s fundamental purpose of connecting people and enabling communication fits naturally with Norwegian cultural values. Norwegian culture emphasizes interconnecting people and solving problems at the individual level by facilitating direct communication between those who can address issues.


Evidence

The desire for the Internet to be ‘for everyone’ and the focus on getting people who can solve problems to talk to each other


Major discussion point

Cultural compatibility between Internet principles and national values


Topics

Sociocultural | Human rights


Agreed with

– Kjetil Kjernsmo

Agreed on

Technology should serve democratic and social purposes beyond commercial interests


Disagreed with

– Kjetil Kjernsmo

Disagreed on

Assessment of current Internet technology’s impact on society


Kristin Braa

Speech speed: 136 words per minute
Speech length: 286 words
Speech time: 125 seconds

Internet revolutionized Global South through health information systems, scaling from post-apartheid South Africa to 80+ countries

Explanation

Starting as a post-apartheid reconstruction project in 1994, Internet technology enabled revolutionary scaling of health information systems across the Global South. The project evolved from using basic technology like floppy disks to leveraging mobile Internet, allowing countries to leapfrog fixed infrastructure and implement national-scale health systems.


Evidence

Kenya became the first sub-Saharan African country with a totally national-scale health information system in 2010, reaching all districts. The DHIS2 platform now operates in 80 countries nationally and in 130 countries through NGOs such as MSF and the Red Cross


Major discussion point

Internet’s transformative impact on healthcare systems in developing countries


Topics

Development | Infrastructure | Sociocultural


Disagreed with

– Josef Noll

Disagreed on

Approach to Internet access barriers in developing regions


Josef Noll

Speech speed: 142 words per minute
Speech length: 176 words
Speech time: 74 seconds

75% of people in sub-Saharan Africa still don’t use mobile broadband due to cost, requiring inclusive connectivity models

Explanation

Despite technological advances, the majority of people in sub-Saharan Africa remain excluded from mobile broadband access because current pricing models make it unaffordable. This highlights the need to question whether existing models are truly inclusive and designed to get everyone connected.


Evidence

Specific statistic that 75% of people in Africa south of Sahara don’t use mobile broadband because it’s too expensive


Major discussion point

Economic barriers to Internet access in developing regions


Topics

Development | Economic | Human rights


Agreed with

– Harald Alvestrand
– Linda Firveld

Agreed on

Internet as essential infrastructure requiring universal access


Disagreed with

– Kristin Braa

Disagreed on

Approach to Internet access barriers in developing regions


Digital infrastructure should follow a ‘road model’ where basic connectivity enables free access for digital pedestrians and cyclists

Explanation

The proposed model suggests that digital infrastructure should operate like physical roads – someone builds the basic infrastructure, but once established, ‘digital pedestrians and cyclists’ can use it for free. This approach aims to ensure universal access while maintaining sustainable infrastructure development.


Evidence

Shoes made from old tires given by Maasai friends in Kenya after connecting them, symbolizing ‘The Walk on the Internet’ initiative


Major discussion point

Alternative models for inclusive Internet access


Topics

Development | Infrastructure | Economic


Kjetil Kjernsmo

Speech speed: 133 words per minute
Speech length: 270 words
Speech time: 121 seconds

Current technology has created cracks in democratic fabric due to lack of proper institutional development

Explanation

The Internet and social media technologies that exist today have damaged democratic processes and institutions. This outcome was not inevitable but occurred because powerful decision-makers lacked understanding of how to build successful democratic societies using these technologies.


Evidence

Kjetil was editor of the Solid Specification, Tim Berners-Lee’s main project, and has early involvement in social and web technologies


Major discussion point

Technology’s negative impact on democratic institutions


Topics

Human rights | Sociocultural | Legal and regulatory


Disagreed with

– Harald Alvestrand

Disagreed on

Assessment of current Internet technology’s impact on society


Norway’s 2004 constitutional change requiring conditions for open public discourse needs new institutions with democratic mandates

Explanation

Norway amended its constitution in 2004 to require state authorities to create conditions for open and enlightened public discourse. However, the state failed to immediately recognize that this constitutional requirement necessitates building new institutions that develop technology with democratic rather than purely commercial mandates.


Evidence

Specific reference to the 2004 Norwegian constitutional change and the concept of ‘sin of omission’ by the state


Major discussion point

Need for democratic governance of technology platforms


Topics

Human rights | Legal and regulatory | Sociocultural


Agreed with

– Harald Alvestrand

Agreed on

Technology should serve democratic and social purposes beyond commercial interests


Social media functions as infrastructure for public discourse, relationships, and commerce requiring democratic oversight

Explanation

Social media platforms should be understood as essential infrastructure that underpins not just public discourse but most social activities including personal relationships, collaboration, and commerce. This infrastructure role necessitates development within open global digital common space ecosystems with democratic governance rather than purely commercial control.


Evidence

Recognition that social media underpins relationships, collaboration, and commerce beyond just public discourse


Major discussion point

Social media as democratic infrastructure requiring public governance


Topics

Human rights | Infrastructure | Legal and regulatory


Linda Firveld

Speech speed: 144 words per minute
Speech length: 175 words
Speech time: 72 seconds

Norway achieved near 100% Internet coverage and transitioned to all-Internet traffic by 2022

Explanation

Norway reached close to 100% Internet coverage by 2022, which is quite unique globally. The country also became the first in the world to completely phase out its fixed telephony network and divert all traffic through the Internet, representing a milestone that Norway hopes other countries will follow.


Evidence

Norway was the first country in the world to divert all traffic through the Internet in 2022, abandoning the fixed telephony network


Major discussion point

Complete transition to Internet-based communications infrastructure


Topics

Infrastructure | Development


Internet and home Wi-Fi represent the fourth pillar of infrastructure alongside electricity and water

Explanation

Internet access and home Wi-Fi have become so essential that they should be considered the fourth pillar of infrastructure, joining electricity, water, and other basic utilities. People now expect Internet connectivity to ‘just work’ as a fundamental household utility with the same reliability as traditional utilities.


Evidence

People expect Internet to work just like electricity and water as a household utility


Major discussion point

Internet as essential infrastructure comparable to traditional utilities


Topics

Infrastructure | Development | Human rights


Agreed with

– Harald Alvestrand
– Josef Noll

Agreed on

Internet as essential infrastructure requiring universal access


Connected devices in Norwegian homes increased dramatically, preparing for a service economy as the fifth infrastructure pillar

Explanation

The number of connected devices in Norwegian homes has evolved rapidly from just 2-3 devices ten years ago to many more today. This proliferation of connected devices is preparing Norway for the next wave of development – a service economy that represents what could be called the fifth pillar of infrastructure.


Evidence

Connected devices increased from 2-3 per home just 10 years ago to much higher numbers today


Major discussion point

Evolution toward Internet of Things and service-based digital economy


Topics

Infrastructure | Economic | Development


## Agreements

Agreement points

Internet as essential infrastructure requiring universal access

Speakers

– Harald Alvestrand
– Josef Noll
– Linda Firveld

Arguments

Internet should be for everyone


75% of people in sub-Saharan Africa still don’t use mobile broadband due to cost, requiring inclusive connectivity models


Internet and home Wi-Fi represent the fourth pillar of infrastructure alongside electricity and water


Summary

All three speakers agree that Internet access should be universal and treated as essential infrastructure, though they approach it from different angles – Harald from a philosophical perspective, Josef from addressing barriers in developing countries, and Linda from infrastructure classification


Topics

Infrastructure | Development | Human rights


Technology should serve democratic and social purposes beyond commercial interests

Speakers

– Harald Alvestrand
– Kjetil Kjernsmo

Arguments

Internet culture aligns with Norwegian values of interconnection and people-level problem solving


Norway’s 2004 constitutional change requiring conditions for open public discourse needs new institutions with democratic mandates


Summary

Both speakers emphasize that Internet technology should prioritize democratic values and social connection over purely commercial objectives, with Harald focusing on cultural alignment and Kjetil on institutional reform


Topics

Human rights | Sociocultural | Legal and regulatory


Similar viewpoints

Both speakers focus on Internet’s transformative potential for developing countries and underserved populations, with Kristin demonstrating successful implementation in health systems and Josef proposing inclusive access models

Speakers

– Kristin Braa
– Josef Noll

Arguments

Internet revolutionized Global South through health information systems, scaling from post-apartheid South Africa to 80+ countries


Digital infrastructure should follow a ‘road model’ where basic connectivity enables free access for digital pedestrians and cyclists


Topics

Development | Infrastructure | Economic


Both speakers highlight Norway’s pioneering role in Internet infrastructure development and innovation, showcasing the country’s leadership from early adoption to complete digital transition

Speakers

– Adelie Dorseuil
– Linda Firveld

Arguments

Norway developed first mobile web browser (Opera) and presented Winter Olympics online in 1993-1994


Norway achieved near 100% Internet coverage and transitioned to all-Internet traffic by 2022


Topics

Infrastructure | Development


Unexpected consensus

Need for institutional reform in technology governance

Speakers

– Harald Alvestrand
– Kjetil Kjernsmo

Arguments

Investment needed in growing next generation of Internet leaders beyond the first generation


Social media functions as infrastructure for public discourse, relationships, and commerce requiring democratic oversight


Explanation

Despite coming from different backgrounds (technical Internet development vs. democratic technology governance), both speakers unexpectedly converge on the need for new institutional approaches – Harald focusing on leadership succession and Kjetil on democratic governance structures


Topics

Human rights | Sociocultural | Development


Infrastructure as foundation for broader social and economic transformation

Speakers

– Kristin Braa
– Linda Firveld

Arguments

Internet revolutionized Global South through health information systems, scaling from post-apartheid South Africa to 80+ countries


Connected devices in Norwegian homes increased dramatically, preparing for a service economy as the fifth infrastructure pillar


Explanation

Unexpectedly, both speakers from very different contexts (Global South health systems vs. Norwegian broadband industry) agree that Internet infrastructure enables fundamental societal transformation beyond mere connectivity


Topics

Infrastructure | Development | Economic


Overall assessment

Summary

The speakers demonstrate strong consensus on Internet as essential infrastructure, the need for inclusive access models, and technology serving democratic/social purposes. There’s also agreement on Norway’s pioneering role and the transformative potential of Internet for societal development.


Consensus level

High level of consensus with complementary perspectives rather than conflicting views. The speakers approach common themes from different angles (technical, policy, development, commercial) but arrive at similar conclusions about Internet’s fundamental importance and need for inclusive, democratically-governed access. This consensus suggests a mature understanding of Internet governance challenges and opportunities.


## Differences

Different viewpoints

Approach to Internet access barriers in developing regions

Speakers

– Kristin Braa
– Josef Noll

Arguments

Internet revolutionized Global South through health information systems, scaling from post-apartheid South Africa to 80+ countries


75% of people in sub-Saharan Africa still don’t use mobile broadband due to cost, requiring inclusive connectivity models


Summary

Kristin emphasizes the revolutionary success of Internet scaling in the Global South through health systems, while Josef highlights that 75% still lack access due to cost barriers, suggesting current models are insufficient


Topics

Development | Infrastructure | Economic


Assessment of current Internet technology’s impact on society

Speakers

– Harald Alvestrand
– Kjetil Kjernsmo

Arguments

Internet culture aligns with Norwegian values of interconnection and people-level problem solving


Current technology has created cracks in democratic fabric due to lack of proper institutional development


Summary

Harald views Internet technology positively as aligning with Norwegian values of connection and problem-solving, while Kjetil sees current technology as damaging to democratic institutions


Topics

Sociocultural | Human rights


Unexpected differences

Optimism vs. concern about Internet’s societal impact

Speakers

– Harald Alvestrand
– Kjetil Kjernsmo

Arguments

Internet culture aligns with Norwegian values of interconnection and people-level problem solving


Current technology has created cracks in democratic fabric due to lack of proper institutional development


Explanation

Unexpected because both are Norwegian Internet pioneers, yet Harald maintains an optimistic view of Internet’s alignment with Norwegian values while Kjetil expresses serious concern about technology’s damage to democracy


Topics

Sociocultural | Human rights


Overall assessment

Summary

The discussion shows moderate disagreement primarily around the effectiveness of current Internet models and technology’s impact on society, with speakers agreeing on goals but differing on approaches


Disagreement level

Low to moderate disagreement level. Most speakers share common goals of universal access and democratic values, but differ on assessment of current progress and methods to achieve objectives. This suggests healthy debate within a shared framework rather than fundamental ideological divisions.


Partial agreements

Similar viewpoints

Both speakers focus on Internet’s transformative potential for developing countries and underserved populations, with Kristin demonstrating successful implementation in health systems and Josef proposing inclusive access models

Speakers

– Kristin Braa
– Josef Noll

Arguments

Internet revolutionized Global South through health information systems, scaling from post-apartheid South Africa to 80+ countries


Digital infrastructure should follow a ‘road model’ where basic connectivity enables free access for digital pedestrians and cyclists


Topics

Development | Infrastructure | Economic


Both speakers highlight Norway’s pioneering role in Internet infrastructure development and innovation, showcasing the country’s leadership from early adoption to complete digital transition

Speakers

– Adelie Dorseuil
– Linda Firveld

Arguments

Norway developed first mobile web browser (Opera) and presented Winter Olympics online in 1993-1994


Norway achieved near 100% Internet coverage and transitioned to all-Internet traffic by 2022


Topics

Infrastructure | Development


Takeaways

Key takeaways

Norway has been a consistent early adopter and innovator in Internet technology, from the first ARPANET connection in 1973 to becoming the first country to transition all traffic through Internet by 2022


The Internet’s transformative power is most evident in the Global South, where it enabled leapfrogging of traditional infrastructure and scaling of critical services like healthcare across 80+ countries


Internet access should be treated as essential infrastructure (the ‘fourth pillar’ alongside electricity and water) and requires inclusive models to ensure universal access


Current Internet governance faces democratic challenges, with technology creating ‘cracks in the fabric of democracy’ due to inadequate institutional frameworks


The Norwegian model demonstrates that Internet development should align with cultural values of interconnection and people-level problem-solving


Investment in developing the next generation of Internet leaders is crucial for sustaining Internet development beyond the first generation of pioneers


Resolutions and action items

Launch of the Affordable Access for Education, Health and Empowerment Act scheduled for Friday at 9am during IGF


Tour of Kjeller (The Basement) facility recommended for Tuesday and Thursday at 5 p.m. to learn about Internet history


Need to build new institutions that develop technology with democratic mandates rather than just commercial ones


Requirement to create open global digital common space ecosystems for social media infrastructure


Unresolved issues

75% of people in sub-Saharan Africa still cannot access mobile broadband due to cost barriers


How to effectively implement the ‘road model’ for digital infrastructure to ensure free access for basic users


How to address the democratic governance gap in social media and Internet infrastructure


How to scale successful connectivity models from remote locations like Maasai villages to broader populations


How other countries can replicate Norway’s success in achieving near 100% Internet coverage


Suggested compromises

The ‘road model’ approach where infrastructure builders invest in connectivity while allowing free access for ‘digital pedestrians and cyclists’


Balancing commercial and democratic mandates in technology development through new institutional frameworks


Thought provoking comments

But what we have today is not what I grew up to love. Most seriously, the technology has opened cracks in the fabric of democracy. This was not inevitable. It happened because some powerful men didn’t have a clue on how to build successful societies, but it can be made really good.

Speaker

Kjetil Kjernsmo


Reason

This comment is deeply insightful because it shifts the entire discussion from celebrating technological achievements to confronting the unintended consequences of internet development. It introduces a critical perspective that challenges the prevailing narrative of technological progress as inherently positive, and specifically identifies the threat to democratic institutions.


Impact

This comment fundamentally changed the tone of the discussion from celebratory to reflective and critical. It moved the conversation beyond technical achievements to examine the societal implications of internet development, setting up the framework for discussing solutions like democratic institutions and digital commons.


In 2004, the Norwegian constitution was changed to include that the authorities of the state shall create conditions that facilitate open and enlightened public discourse. Unfortunately, there was a sin of omission that the state did not immediately realize that this means that we need to build new institutions. Institutions that develop technology with a democratic mandate rather than just a commercial one.

Speaker

Kjetil Kjernsmo


Reason

This is a profound observation that connects constitutional principles to technological governance. It’s thought-provoking because it identifies a specific gap between democratic ideals and technological implementation, proposing that technology development should have democratic rather than purely commercial mandates.


Impact

This comment introduced the concept of institutional innovation as necessary for democratic technology governance. It provided a concrete example of how legal frameworks need to evolve to address technological challenges, influencing the discussion toward governance solutions.


So being able to utilize and leapfrog the fixed net, totally no fixed net, no fixed phone, only mobile Internet… That was a revolution in 2010.

Speaker

Kristin Braa


Reason

This comment is insightful because it illustrates how developing countries can bypass traditional infrastructure limitations through mobile technology. It demonstrates that technological advancement doesn’t always follow linear paths and that constraints can sometimes lead to innovative solutions.


Impact

This shifted the discussion from a Norway-centric perspective to a global one, showing how internet development can have different trajectories in different contexts. It introduced the concept of technological leapfrogging and expanded the conversation to include Global South perspectives.


Despite what Kristin said, we still have 75% of people in Africa, south of Sahara who don’t use mobile broadband because it’s too expensive. So we should ask ourselves, are the models which we are using, inclusive models?

Speaker

Josef Noll


Reason

This comment is thought-provoking because it challenges the optimistic narrative about mobile internet adoption by highlighting persistent inequality. It forces a critical examination of whether current business models are truly serving universal access goals.


Impact

This comment grounded the discussion in current realities and introduced the critical question of inclusivity in internet access models. It led to the introduction of innovative approaches like ‘The Walk on the Internet’ and the road metaphor for digital infrastructure.


Internet and home Wi-Fi is to be considered as the fourth pillar of infrastructure, meaning it’s a household utility. People just expect this to just work just as electricity and water.

Speaker

Linda Firveld


Reason

This comment is insightful because it reframes internet access from a luxury or service to a fundamental utility, comparable to basic infrastructure needs. It suggests a fundamental shift in how society conceptualizes internet access.


Impact

This comment elevated the discussion about internet access to the level of basic human needs and infrastructure rights. It provided a framework for understanding why universal access is not just desirable but necessary, and set up the concept of a ‘fifth pillar’ representing the service economy.


Overall assessment

These key comments transformed what began as a celebratory historical overview of Norwegian internet achievements into a nuanced discussion about democracy, inequality, and the future of internet governance. Kjernsmo’s critical perspective on democracy was particularly pivotal, shifting the entire tone from triumphant to reflective. The comments collectively moved the discussion through three phases: celebration of technical achievements, recognition of persistent inequalities and democratic challenges, and finally toward solutions involving new institutions and inclusive models. The speakers built upon each other’s insights, creating a comprehensive narrative that connected local Norwegian innovations to global challenges and future governance needs.


Follow-up questions

How can we invest in and grow the people who will take over from the first generation of Internet pioneers?

Speaker

Harald Alvestrand


Explanation

This addresses the critical need for knowledge transfer and capacity building as the original Internet pioneers age out of active roles


How can we develop inclusive models that ensure everyone can access and afford mobile broadband, particularly in regions where 75% of people don’t use it due to cost?

Speaker

Josef Noll


Explanation

This highlights the ongoing digital divide issue and the need for sustainable, affordable connectivity solutions in underserved regions


How can we build new institutions that develop technology with a democratic mandate rather than just a commercial one?

Speaker

Kjetil Kjernsmo


Explanation

This addresses the need for governance structures that prioritize democratic values and public interest over purely commercial interests in technology development


How can we create open global digital common space ecosystems for social media infrastructure?

Speaker

Kjetil Kjernsmo


Explanation

This focuses on developing alternative models for social media that serve as public infrastructure rather than private commercial platforms


How can we address the cracks in democracy that technology has created?

Speaker

Kjetil Kjernsmo


Explanation

This addresses the urgent need to understand and mitigate the negative impacts of current technology implementations on democratic processes and institutions


How can we prepare for and implement the ‘fifth pillar of infrastructure’ – the service economy enabled by ubiquitous connectivity?

Speaker

Linda Firveld


Explanation

This explores the next phase of digital transformation where connectivity enables new forms of service delivery and economic models


Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

WS #53 Leveraging the Internet in Environment and Health Resilience


Session at a glance

Summary

This session at the Internet Governance Forum in Norway focused on leveraging the Internet for environment and health resilience, co-moderated by Jorn Erbguth and members of the Dynamic Coalition on Data-Driven Health Technologies. The discussion explored how digital technologies can enhance healthcare resilience while also introducing new vulnerabilities and risks of abuse.


Speakers emphasized that while Internet connectivity proved crucial during COVID-19 for telemedicine and remote healthcare services, total dependence on networks creates vulnerabilities to cyberattacks and system outages that can disrupt clinical decisions and supply chains. The session highlighted concerns about how large-scale health data collection and AI systems could perpetuate or create new inequities, using the example of funding disparities between breast cancer and prostate cancer research despite similar incidence and mortality rates.


Participants from developing regions, particularly the Caribbean and Africa, shared challenges including limited Internet access, high costs, natural disasters, cultural barriers, and dependence on external funding and expertise. They stressed the importance of digital literacy, community engagement, and culturally relevant solutions that integrate local languages and address affordability concerns.


Several speakers presented examples of successful digital health initiatives, including malaria modeling projects, air quality monitoring systems, and outbreak surveillance platforms. However, they emphasized that equity must be a design principle rather than an afterthought in developing these technologies.


The discussion concluded with calls for unified regulatory frameworks, multi-stakeholder collaboration, and a renewed digital social contract that prioritizes people and planet over profit. Participants agreed that while technology offers tremendous opportunities to improve healthcare and environmental resilience, careful governance is essential to manage risks and ensure equitable access to benefits.


Key points

## Major Discussion Points:


– **Digital divide and accessibility challenges in healthcare technology**: Speakers from developing regions (Caribbean, Africa) highlighted significant barriers including limited internet access, high costs, lack of digital literacy, and cultural resistance to adopting digital health solutions.


– **Data quality, validity, and governance in health and environmental monitoring**: Discussion focused on ensuring the reliability of data collected through IoT devices and sensors for environmental health surveillance, emphasizing the need for standardized collection methods and unified regulatory frameworks.


– **AI’s dual role in healthcare – benefits versus risks**: Participants explored how AI can enhance healthcare delivery and environmental monitoring while raising concerns about algorithmic bias, transparency, and the potential for governments to misuse data for discriminatory resource allocation.


– **Integration of environmental and health data for climate resilience**: Speakers emphasized the WHO mandate that health issues are integral to climate change, discussing how internet-enabled systems can support early warning systems, disease surveillance, and resource allocation during environmental crises.


– **Balancing technological advancement with human-centered care**: The discussion addressed concerns about maintaining empathy and compassionate patient care while leveraging AI-driven healthcare solutions, with speakers noting that technology should complement rather than replace human connection.


## Overall Purpose:


The discussion aimed to explore how internet technologies, data systems, and AI can be leveraged to build resilience in both environmental and health challenges, while addressing the risks and inequities these technologies may introduce. The session was part of the Internet Governance Forum’s Dynamic Coalition on Data-Driven Health Technologies.


## Overall Tone:


The discussion maintained a collaborative and constructive tone throughout, with speakers sharing practical experiences and challenges from diverse global perspectives. While acknowledging significant obstacles – particularly around digital divides and potential misuse of technology – the overall tone remained optimistic about technology’s potential to improve health and environmental outcomes. The conversation was academic yet accessible, with participants building on each other’s points and offering complementary perspectives rather than conflicting viewpoints.


Speakers

**Speakers from the provided list:**


– **J Amado Espinosa L** – Veteran in medical informatics, founder of Medicist, key figure in digital health reform in Latin America


– **Yao Amevi A. Sossou** – Internet governance advocate and youth mobilizer from Benin, member of the We the Internet coalition


– **Audience** – Marcelo Fornasin from Oswaldo Cruz Foundation Brazil, researcher in public health


– **Jorn Erbguth** – Session co-moderator


– **Henrietta Ampofo** – Medical doctor with strong environmental and health advocacy background, former focal point of UNEP’s children and youth major group


– **Frederic Cohen** – Data-driven governance advocate with expertise in IGF and WSIS


– **June Parris** – Retired specialist nurse in primary care mental health, former MAG member at UN and Civicus, active ISOC and Civil Society health initiatives member from Barbados


– **Amali De Silva-Mitchell** – Founder of the Dynamic Coalition on Data-Driven Health Technologies, session co-moderator


– **Joao Rocha Gomes** – Online moderator, medical professional


– **Alessandro Berioni** – Medical doctor, chair of the Young Working Group of the World Federation of Public Health Associations


– **Houda Chihi** – Senior researcher in wireless and green communication from Tunisia, PhD in telecommunications


– **Jason Millar** – Environmental professional from Barbados


**Additional speakers:**


None identified beyond the provided speaker names list.


Full session report

# Leveraging the Internet for Environment and Health Resilience: A Comprehensive Discussion Report


## Executive Summary


This session at the Internet Governance Forum in Norway examined the critical intersection of digital technologies, environmental health monitoring, and healthcare resilience. Co-moderated by Jorn Erbguth and members of the Dynamic Coalition on Data-Driven Health Technologies, the discussion brought together diverse stakeholders from across the globe to explore how internet-enabled systems can enhance public health whilst addressing the inherent risks and inequities these technologies may introduce.


The session opened with Amali De Silva-Mitchell’s call for “thinking globally and integrated,” emphasizing that the WHO has mandated health matters as an integral part of climate change issues. This framework set the stage for a comprehensive exploration of how digital technologies present both unprecedented opportunities and significant challenges for environmental health monitoring, disease surveillance, and healthcare delivery.


Jorn Erbguth introduced the central theme of internet connectivity as a “double-edged sword” in healthcare, providing specific examples of how the same technologies that enabled telemedicine during COVID-19 also created new vulnerabilities to cyber attacks that could disrupt clinical decisions and supply chains.


## Key Thematic Areas


### The Double-Edged Nature of Digital Health Technologies


Jorn Erbguth established the foundational framework for the discussion by highlighting how internet connectivity creates both opportunities and risks in healthcare. He noted that while digital technologies enabled crucial healthcare delivery during the pandemic, they simultaneously made healthcare systems vulnerable to cyber attacks that could affect clinical decisions and supply chains.


This duality was reinforced throughout the session, with speakers consistently acknowledging that digital health solutions offer transformative potential while introducing new forms of risk and inequality. The challenge, as articulated by multiple participants, lies in maximizing benefits while mitigating these inherent risks.


### Regional Perspectives on Digital Health Implementation


#### Caribbean Challenges and Opportunities


June Parris, a retired specialist nurse in primary care mental health and former MAG member from Barbados, provided detailed insights into Caribbean healthcare challenges. She emphasized that “we are in a developing world and we have all these problems… We have the natural disasters, we have sargassum weed” that create ongoing disruptions to healthcare delivery.


Jason Millar, speaking as an environmental professional about Caribbean challenges, articulated the constraints of external dependency: “Any funding agency that targets us for aid will usually make us an offer, but at the same time, that also will come with terms and conditions or a set of constraining factors that will limit the actual potential to maybe fully address an issue in a way that is fully beneficial for us.”


Millar provided specific examples of environmental health challenges facing Barbados, including Sahara dust affecting respiratory health, sargassum seaweed creating coastal health issues, Hurricane Beryl’s recent impacts, and agricultural pesticide concerns. These concrete examples illustrated how environmental and health challenges intersect in small island developing states.


#### African Perspectives on Data-Driven Health Solutions


Henrietta Ampofo, a medical doctor speaking from the AMNET conference in Dakar, presented practical examples of successful data integration in African contexts. She described malaria modeling projects that use climate data to predict disease patterns and allocate resources more effectively, demonstrating how environmental variables can be integrated into health planning to create more responsive healthcare systems.


Her examples showed that despite connectivity challenges, innovative approaches can overcome some infrastructure limitations and create effective data-driven health solutions even in resource-constrained environments.


### Digital Divide and Accessibility Barriers


Alessandro Berioni, chair of the Young Working Group of the World Federation of Public Health Associations, highlighted that “only two thirds of the global population are online and have access to internet and 2.6 billion are not achieving actually connection.” This statistic provided crucial context for understanding the scale of accessibility challenges facing digital health implementation.


The digital divide emerged as a universal challenge across all regions represented. June Parris noted that economic barriers make it difficult for Caribbean countries to keep up with developed nations’ health technology, while the cost of internet access and maintenance creates substantial barriers where healthcare systems already struggle with resource constraints.


### Trust, Cultural Barriers, and Community Engagement


Yao Amevi A. Sossou provided compelling research findings that challenged assumptions about technology adoption: “Most of the patients I interviewed during the research they didn’t trust on the solutions and even the doctors those that use the platform they confirmed that none of their patients are using the app.”


This insight revealed that technical solutions alone are insufficient without addressing fundamental trust issues. Sossou explained that patients often prefer self-medication over formal healthcare due to mistrust of healthcare systems, and this mistrust extends to digital health tools. The lack of awareness about available digital health solutions compounds these challenges.


June Parris acknowledged cultural resistance to change in island communities, emphasizing that healthcare professionals must create empathy with patients to build trust. She stressed that technology implementation requires careful attention to human relationships and cultural sensitivity.


### AI Integration and Algorithmic Bias Concerns


The role of artificial intelligence generated nuanced discussion about both opportunities and risks. Jorn Erbguth presented a thought-provoking warning: “AI can either propose optimal care or be used by governments or industry to triage healthcare according to criteria, quietly determining who receives high-cost therapies and who is excluded.” He emphasized that “these inequities will not be transparent.”


To illustrate this concern, Erbguth cited specific funding disparities: breast cancer research receives significantly more funding than prostate cancer research despite similar incidence and mortality rates, demonstrating how existing biases can be perpetuated or amplified through algorithmic systems.


Alessandro Berioni highlighted a fundamental structural problem: “The algorithms are mostly based on engagement drive. So they’re engagement-driven algorithms rather than value-driven algorithms.” He called for a renewed digital social contract to put people and planet before profit and engagement algorithms.


Interestingly, Erbguth also presented counterintuitive research findings: “Studies have shown that people tend to see more empathy in AI than in human doctors. Human doctors are often stressed under time pressure, and sometimes they don’t act with the empathy we would like them to act with.” This observation sparked discussion about how technology might complement rather than replace human care.


### Cyber Sustainability and Data Governance


Houda Chihi, a senior researcher in wireless and green communication from Tunisia with a PhD in telecommunications, introduced the innovative concept of “cyber sustainability” – combining cybersecurity practices with environmental protection. This approach recognizes that cybersecurity measures must be integrated with sustainability principles to create comprehensive protection for both data and environmental health.


Chihi emphasized the importance of testing and validation before deploying healthcare solutions, noting that the reliability of environmental health data is crucial for effective public health responses. She advocated for collaboration between technical communities, academia, and medical experts to ensure solutions meet both technical and clinical requirements.


### Environmental Health Monitoring and Technology Integration


Joao Rocha Gomes presented specific examples of internet-enabled environmental health monitoring, including air quality mapping initiatives and early warning systems for climate-related health threats. His presentation highlighted how IoT devices and sensors can monitor environmental factors affecting health, though he emphasized the need for proper validation and integration with existing health systems.


J Amado Espinosa L advocated for integrating environmental variables into personal health records, suggesting that individual health management should incorporate broader environmental context to enable more personalized and environmentally-informed healthcare delivery.


## Areas of Strong Consensus


### Universal Challenge of Digital Divide


Multiple speakers from different regions achieved strong consensus that the digital divide creates significant barriers to healthcare access. This consensus was particularly notable because it emerged from speakers representing different geographic regions and professional backgrounds, suggesting these challenges require coordinated global responses.


### Need for Human-Centered Approaches


Despite disagreements about AI’s role, speakers agreed that technology should complement rather than replace human empathy in healthcare. There was clear consensus that healthcare professionals must maintain empathy with patients while educating themselves about emerging technologies.


### Importance of Cultural Appropriateness


Speakers agreed that digital health solutions must be designed with community input, cultural considerations, and local context in mind. This consensus emphasized the need for locally integrated solutions rather than one-size-fits-all approaches.


## Key Areas of Disagreement


### Role of AI in Healthcare Empathy


An unexpected disagreement emerged regarding AI’s capacity for empathetic care. Jorn Erbguth suggested that studies show people perceive more empathy in AI than in stressed human doctors, while other speakers emphasized the irreplaceable importance of human empathy and genuine patient connection.


### Primary Implementation Barriers


Speakers showed different emphases regarding the primary barriers to digital health adoption. June Parris focused on economic and infrastructure barriers, Yao Sossou emphasized cultural mistrust and awareness issues, while Alessandro Berioni highlighted the global digital divide as the fundamental challenge.


### Governance Approaches


While speakers agreed on the need for better governance of digital health solutions, they disagreed on approaches. Some emphasized global frameworks, others focused on regional policies, while technical speakers emphasized the importance of testing and validation protocols.


## Critical Insights and Unresolved Issues


The discussion identified several critical unresolved issues requiring further attention. Establishing unified regulatory mechanisms for health data collection across different regions remains challenging, particularly given varying national priorities and regulatory frameworks.


The question of balancing AI-driven healthcare efficiency with essential human empathy requires ongoing exploration, especially as healthcare systems face increasing resource pressures.


Fundamental affordability barriers that make digital health solutions inaccessible to low-income populations need innovative solutions, including sustainable funding models that don’t come with restrictive external constraints.


## Q&A Highlights and Community Engagement


A particularly insightful question from IGF Ghana addressed how nurses can balance AI integration with maintaining empathy in patient care. This prompted detailed responses from multiple speakers about the complementary role of technology in healthcare delivery.


The interactive discussion revealed that successful digital health implementation requires moving beyond purely technical solutions to embrace community engagement and integrated governance frameworks that prioritize equity and sustainability.


## Recommendations and Next Steps


The session concluded with concrete action items. Participants were encouraged to join the Dynamic Coalition on Data-Driven Health Technologies and access published documents on the IGF homepage. An upcoming hackathon on “Shaping the Future of Health” was announced for July 2nd, with a global call for AI-powered social innovation ideas to be launched in late July.


Follow-up sessions were planned, including continued discussion at WSIS and the World Federation of Public Health Associations conference in Cape Town on September 26th.


## Conclusion


This comprehensive discussion revealed that leveraging the internet for environment and health resilience requires careful navigation of complex technical, social, and ethical challenges. The session demonstrated that while digital technologies offer unprecedented opportunities for environmental health monitoring and healthcare delivery, successful implementation requires addressing fundamental issues of equity, trust, cultural appropriateness, and governance.


The strong consensus around key challenges provides a foundation for collaborative action, while the unresolved issues around regulatory frameworks, sustainable funding, and algorithmic accountability require continued multi-stakeholder engagement. As participants work toward the renewed digital social contract advocated during the session, the insights provide valuable guidance for ensuring that internet-enabled health and environmental solutions truly serve the goal of building resilience for all, rather than perpetuating existing inequities or creating new forms of digital exclusion.


The path forward requires sustained collaboration between technical communities, healthcare professionals, policymakers, and communities to ensure that digital health innovations prioritize people and planet over profit, as emphasized throughout this thought-provoking discussion.


Session transcript

Jorn Erbguth: I guess we are live now. Welcome to this session, Leveraging the Internet in Environment and Health Resilience. And I would like to also welcome Amali, who will be co-moderating. She should be online here. Do we see her? Okay, here. Amali has founded the Dynamic Coalition on Data-Driven Health Technologies and we are proud to be able to moderate this session at the IGF in Norway. And so, be with us. We should now be able to see the next slide. Yes. So, this is our agenda. And Amali, maybe you would like to introduce the agenda?


Amali De Silva-Mitchell: Thank you, Jörn. I will actually pass the online moderation to Dr. Joao Gomes very quickly. But I just want to make a couple of points before this session starts. I want everybody to think globally and in an integrated way. When you are making policy decisions, please treat them as an integrated policymaking opportunity, especially for governance frameworks and so forth. I just want to make this very simple point: the WHO has mandated that health matters are an integral part of climate change issues, and we need to look at that from the perspective of the whole community, as an ecosystem of services. This will include public safety, emergency services, ambulances, hospitals, doctors, citizens and so forth. So I just want to make this very simple comment about the importance of integrating our services, and about the tremendous opportunity we have with ICTs to help enable this very important work on the climate and on how humans, plants and animals are going to survive in the future. And we will just hand this over now to Joao, please. Joao, please take it from here.


Joao Rocha Gomes: Hello, everyone. Good to see you. I hope you can hear me well in the room. Thank you for the quick introduction, and a warm welcome to Lillestrøm. I am not able to be in person with you, but hopefully I can support online as well with the speakers that we are going to have online. I will just make a brief note about the agenda. As you can see projected at the moment, we will start with some keynote introductions. We have several speakers that will join us. They are not all part of the Dynamic Coalition on Data-Driven Health Technologies, so we have people from diverse backgrounds joining us. Then we will have mostly the participants of this Dynamic Coalition sharing their thoughts on this topic in a lightning round, with shorter times assigned to each speaker, as we are mindful of the time. We will have everyone speaking and sharing their thoughts on that. Then I will bridge into a discussion section where the room will be open for comments, questions, or ideas from anyone, either online, which I will collect and share with the room, or in the room itself, and Jørn will bridge the questions to the speakers and make sure that the comments have a good grounding and a good place to be shared. And without further ado, I will pass the word back to the room, to Jørn, so that we can proceed with the agenda.


Jorn Erbguth: Thank you, Joao. I would like to take a quick look at our topic of resilience, vulnerability, and control in the context of leveraging the Internet in environment and health resilience. Of course, we have seen that connectivity boosts resilience. During COVID, we saw that the Internet gave us the possibility to continue lots of activities, including healthcare and telemedicine, from remote diagnostics to robot-assisted surgery. The Internet kept services running when COVID hit, and it does so when natural disasters occur and when we have pandemics or political unrest that block physical access. But of course, the total dependence on the network also makes outages instantly disruptive for clinical decisions, supply chains, and payment systems. It also makes hospitals vulnerable to cyber attacks, as we all know. When we go a step further with large-scale health data collection, it is also a double-edged sword: population-wide data sets can strengthen surveillance, early warning, and resource allocation concerning diseases, yet the same data can establish new inequities or hardwire old ones, visible, for instance, in the persistent funding skew towards breast cancer research over prostate cancer studies, just one example I would like to point out on the next slide. When it comes to AI, this double-edged sword is sharpened even further. AI can either propose optimal care or be used by governments or industry to triage healthcare according to opaque criteria, quietly determining who receives high-cost therapies and who is excluded. So the internet, data, and AI can strengthen healthcare resilience and resilience to environmental challenges, but they also introduce not only additional technical vulnerabilities, but, more worrying still, a hard-to-detect potential for government abuse. So let's take a look at some statistics. Here we see that the incidence of breast cancer and prostate cancer is about the same.
The mortality is also about the same, but the funding is far from the same. We see that the government funding is only about half, and the philanthropic funding is much less than half. This is just one example. Imagine that data and AI can be used, or abused, by governments to focus on specific population groups where they want to allocate more funding, and the data will make it possible to analyze exactly where such funding has the effect of increasing inequities. This, of course, is a risk that we face, and these inequities will not be transparent. They will not be visible, because those decisions can be made in the dark and can be hidden behind decisions that seem to be neutral. So, just a short introduction to where there are hidden risks. With this, I do not want to extend my time further, and I give back to Joao to moderate the next keynotes.


Joao Rocha Gomes: Thank you. Thank you so much, Jorn. I will move on very quickly now to June Parris. I believe she is in the room with you all. She is a retired specialist nurse in primary care mental health, a former MAG member active at the UN and with Civicus, and an active member of ISOC and civil society health initiatives, from Barbados. So, June, if you are with us in the room, I will now give you the word.


June Parris: Good evening. Good afternoon to everyone in the room and everyone online. My name is June Parris. As he mentioned, I am retired, but I have been involved with IT for a number of years as a nurse. I worked in Europe, mainly in the UK, where health care is connected to the internet and to health care systems. So I have had those experiences working in a developed country, and then I retired to Barbados. It was not the same when I got there. It is not easy for them to keep up with what is happening in the first world. My colleague has alluded to some of the problems that we face, mainly economic. Catching up is difficult, and we have to rely on expertise from overseas, from Europe and North America. We also have to rely on experts coming in to give advice, and we rely a lot on funding. How far does this funding go? In the Caribbean, we suffer from natural disasters, so we take one step forward and two steps backward most of the time. Therefore, any systems that are put in place are funded and staffed by people from outside, North Americans and Europeans. We have to keep repeating this funding and spreading the resources out to accommodate all the changes that we would want to make. As you can see, that is very difficult with limited resources. So apart from natural disasters, there is also a climate of culture. Do we really want change? Is it easy to make changes in an island culture? The way we think in the islands is not the same as the way we think in Europe, and I have the experience of working in Europe. Basically, we do not really think outside the box. There are also other problems, namely the cost of the Internet, access to the Internet, maintenance of Internet systems, and basic use of the Internet. We are in the developing world and we have all these problems, you know.
We have, as I mentioned before, the natural disasters, and we have sargassum weed. Have you ever heard of sargassum? It is creating absolute problems on the island: health problems as well as food shortages in terms of fishing. We had a natural disaster a few months ago where fishing boats were destroyed and people were unable to go fishing for a while. Therefore, the cost of living went up, health problems increased, and, you know, it all reflects back on finances and economics. Okay, we are trying, we are making changes, we are trying to improve, we are employing experts, we are receiving funding, lots of funding, but we have to put it to good use. So where does the Internet come in? How can we afford to keep up to date with all the other places in the world that are way ahead of us? I am thinking that we need to educate. Education is very important. We need good use of resources, we need to improve our facilities, and we need to think ahead and plan ahead by having more research on these systems, trying to understand how to deal with them, and putting the Internet to good use, I would say. So, I think I am going to wrap up now. I think I have said everything that I want to say, and my other colleagues will add to it.


Joao Rocha Gomes: Thank you so much, June, and I believe this is a great segue to Jason Millar, whom we have online to speak on this topic as well. I will now pass the word to him, and I believe he can follow up on your words as well. Thank you.


Jason Millar: Hi, good morning, everyone. I hope you can hear me. As June just mentioned, we in the Caribbean have a number of not entirely unique, but definitely very in-our-face challenges. We depend very heavily on external resources, which means that any funding agency that targets us for aid will usually make us an offer, but that offer also comes with terms and conditions, or a set of constraining factors, that limit the actual potential to fully address an issue in a way that is fully beneficial for us, based on some of the boundaries that were put in place. We have many challenges as it relates to air quality, because we are affected by things like Saharan dust, which flows fairly constantly across the Atlantic. It is a very important and natural process, but certain aspects of these processes have been enhanced by climate change. For example, as June mentioned, the intrusion of seaweed continues to get worse within the Caribbean region. We also have the formation of tropical systems, such as Hurricane Beryl, which did significant damage to us even though it did not hit us entirely directly, and, as June mentioned, destroyed much of our fishing fleet. We also have anthropogenically caused issues within our islands, such as outputs from industrial agricultural processes. Barbados is known for the production of sugar, and we have many cane fields on which farmers use pesticides and herbicides to control weeds. Because of the permeable nature of our rock, those chemicals seep into our groundwater supply. We have a lot of individual challenges that can combine into larger challenges. In seeking external help, we have to depend on the boundaries that are put in place by every funding agency that offers us help, and that may not allow us to fully remediate any of the issues. We also have problems with fires.
There is a culture of burning in Barbados, for example, where, despite having laws to regulate the hours within which you can burn and the areas in which you can burn, vast sections of open lots and other areas where there are no physical buildings are burned yearly, because that is the culture. That is the practice, and dissemination of information is difficult in this age because of things like the rise of the influencer, as opposed to cultures moving more towards embracing information from official state sources. The internet has been used to a point by the state, and I am using Barbados as an example, but not enough to say that all of its potential has been realized. We could have more information about proper waste disposal sites to help with our waste issue; dumping is a very large issue in Barbados as well. All of these things affect the soil quality due to the permeable nature of our rock, our groundwater systems, our baths, and that is within a very small surface area. So, the potential for the internet to shape public perception is very strong in Barbados, and it has not been used as well as it could be. This could be changed in a more meaningful way. Sorry, I am losing signal. Yes, it could be leveraged in a more meaningful way to make sure that people are more aware of certain aspects of governance within Barbados and of policies that have been made to protect the public, as opposed to it being used to spread misinformation about these same things. I think I will wrap up there.


Joao Rocha Gomes: Thank you so much, Jason. I did not properly introduce you, but I believe it was clear from your speech that you are working in the context of Barbados as an environmental professional. It was also clear that the problems you shared are not just one-sided, or seen from only one viewpoint, because June Parris had already mentioned many of the aspects that were voiced and echoed, but you definitely added a different perspective to the topic. So, thank you for that. Without further ado, I will also introduce Henrietta Ampofo. She is a medical doctor with a strong environmental and health advocacy background and a former focal point of UNEP's Children and Youth Major Group. So, I will now give you the word, Henrietta. I believe you are with us online. You are muted, Henrietta.


Henrietta Ampofo: Hello. Thank you very much. Hi, I'm Dr. Henrietta Ampofo, as rightly introduced. I will just touch on solutions and prospects, looking at our topic of environment, health, resilience, and the Internet, and how these come together. Currently, I am speaking from Dakar, where I am at the AMNET conference, the Applied Malaria Modelling Conference. It is an epitome of how these three domains come together to solve problems. Here, I am speaking to researchers, and AMNET is being sponsored by the Gates Foundation to empower and equip researchers with skills to be applied in malaria modelling. Now, how does the Internet come in? How does the environment come in? And how do we provide solutions in these spheres? For example, with malaria modelling, you can find out at which point in the year you have an increased incidence of malaria, especially from the effects of climate change and the fact that malaria is a vector-borne disease. You may have an increased incidence of malaria during certain seasons, and if you are able to put that into the model and identify the factors that are increasing it, you are then able to, as June mentioned, allocate funding to areas that are in need. You can even subdivide it into populations that are more susceptible. These are the ways in which the Internet can support this, right? Now, for most of the modelling we are using data sets, and these data sets are sitting on servers and being accessed through cloud computing. Some of the modelling is done online, and the data sets and the information are available online to researchers and interested participants or policymakers. So the Internet is facilitating the combination of these domains, just using malaria as an example of health resilience, to bring out solutions and implement interventions in a timely manner.
If we are able to now… have this data broken down and you realize that a particular population, maybe children under five, are more susceptible, that can also influence, you know, vaccine interventions, right? The fact that we think that malaria vaccines should be given to them. And so in all of this, the internet is very integral in the area of climate, in the area of environmental health, in the area of health resilience. And I would like to end here. I am sure there are other people that can share more wonderful experiences, but we must always remember that internet governance is integral. It supports the infrastructure on which we can not just identify problems, not just offer solutions, but also disseminate solutions, right? We talk about integrity of the data. We are talking about cyber security. How safe is it? How are we sure that the data has not been tampered with? How accessible is it, right? Different parts of the world, in Africa, have connectivity issues. So all these things come into play when we want to use the internet for good. Thank you very much.


Joao Rocha Gomes: Thank you so much, Henrietta, and also best of luck voicing these concerns in Dakar, where you are at the moment. Thank you for finding the time to join. I will now pass the word to Alessandro Berioni. He is, just like Henrietta, a medical doctor, and he is currently the chair of the Young Working Group of the World Federation of Public Health Associations. And I believe you have a presentation to share, so let's see if that works out with you sharing the screen. Feel free.


Alessandro Berioni: Thank you so much, Joao, for the kind introduction. And I hope you can see my presentation. Can you? Yes, it's still loading. We see a WhatsApp conversation. Okay, that's not good. Let me try again. Can you see it now? Yes. Okay, great. Hello, everyone. Good morning. Thanks for the invitation, and I hope you are keeping fresh in Oslo; it is boiling here in Rome. As Joao said, I am the chair of the Young WFPHA, the working group of the World Federation of Public Health Associations. I wanted to start my presentation with this title, which I think resonates quite well with the topic you were mentioning: the internet, the environment, and health resilience. From the public health perspective, I wanted to center this discussion with a quick keynote on the web and well-being: how the net can actually both enhance and threaten well-being and public health in general in this really interesting era we are living in. So I will go through a quick introduction, explain some of the interplay between public health and the internet, and then some challenges and priority solutions. Introducing myself from the start: I have been working with AI, I co-founded the Italian Association in Medicine, I worked with the WHO Young Innovators for Quality of Care group, which was mostly practical, innovating around quality of care challenges, and finally I work with the World Federation of Public Health Associations. So there is a common thread around health and innovation, which I am really passionate about. First, what is the main role of the internet in public health advancement and resilience?
Most of the speakers have already said it, and I would like to echo what they have said: the internet is enhancing, I would say, early warning systems for most concerns, whether climate-made, human-made and so on; remote care for underserved areas; and health communication, whether for public health messages, information or education. There is a but, because as many of the speakers have discussed previously, not everyone is reached. There is still a huge digital divide, which is a main concern in the public health field. Indeed, only two thirds of the global population are online and have access to the internet, and 2.6 billion people have no connection, often because of the situation of living in rural areas. I would also like to remark on the last point, the one on youth, who are the most connected people, with a penetration rate of 72% compared to the general population. So as a youth group, I would like to highlight that and ask how young people can best use the internet, the web and all the digital tools we are daily exposed to. There are some problems, as I said before, some challenges. Again, the digital divide: not everyone has access. For example, many of the people we are collaborating with in Africa do not always have a chance to connect to the internet. There is a huge problem with misinformation, as we saw with COVID, with the vaccination situation, and with political situations, as we are seeing as well, and with ethical data governance in terms of transparency and accountability of data. Again, with all the neural networks, we cannot most of the time track where the data is going, so this is very important to understand. And I will move to the next slide, which is one of my core slides. I have recently been reading this book, which I really recommend: Nexus by Yuval Noah Harari.
There are three core parts I wanted to share with you today, because they really resonate with the main components of health. First, the fact that nowadays we are not living with passive information systems but with active tools, again, neural networks. So we know we have a re-elaborated source of information, and we do not know what all this elaboration is going through. Secondly, the algorithms are mostly engagement-driven rather than value-driven, and this is a key point we need to address; it is a main challenge, I personally think, in terms of all the private generation of these algorithms. And finally, we need to restructure global governance into digital global governance, as the UN is doing, for example, with the Global Digital Compact and an AI governance body. But this is something that also needs to be brought to the national level to enhance the regulation around all these tools. So what are we doing as the Young WFPHA, the working group of the World Federation of Public Health Associations? We have some priority actions, including designing digital tools for equity and inclusivity; promoting digital governance through frameworks and quality standards; advocating, of course, as civil society, as we are not a governmental body; and finally, fostering multi-sector collaboration across health, environment, tech, education, and so on. These are some of the achievements we have had: we do lots of education and advocacy around the UNGA, the World Health Assembly, and many others, and collaborations with the Council of Europe and so on. Again, one of our critical pillars is capacity building: allowing people to develop digital literacy and the ability to use digital tools meaningfully, and finding ways to train communities, the population, and health workers through the digital tools that we advocate for.
These are some of the conferences we have organized. If you want to participate, the next one is going to be on September 26th in Cape Town. Thank you, Arko, for registering. Another point I wanted to address is how the internet is allowing people to co-create solutions, aggregating communities, civil societies, and young people, and allowing better participation in these platforms, let's say. So I am about to end. I just want to remark on the innovation ecosystem that is our strategy for addressing the current challenges in this internet sustainability situation. The first point is enhancing bottom-up innovation to address these public health challenges, which are big, big challenges and which lately lack not only innovation but also funds. And secondly, structuring a solid platform framework that can allow this innovation process to go through all these repeated steps in order to get grounded in a specific implementation, let's say, country. So I am bringing you two examples. One is the hackathon we are organizing for July 2nd, in case you would like to participate; this is something we are moving forward, shaping the future of health with several partners. And then in late July we are launching a call for global ideas on social innovation, mostly AI-powered social innovation, to showcase how public health can be advanced through cutting-edge technology. So, to sum up these points, we would like to call for an internet that is more solid, that can provide early action, and that can also be a shared resilience worldwide. This is our take-home message: youth are at the forefront of the internet and the web in this digital era; innovation is essential in public health; and we call for a renewed digital social contract that puts people and planet first, before profit and the engagement algorithms, let's call it so.
And that's it; I am sorry for running a bit late, a minute or two. If you want to connect with me or with the Young WFPHA, these are the contacts. Thank you, Joao, for the moderation. And back to you.


Joao Rocha Gomes: Thank you so much, Alessandro, not just for the thoughts, but also for the actions and events you are organizing, and even the book recommendation that you shared here. I will probably ask you later to forward the presentation so that we can share these resources and have them statically available. Thank you. And, as you also mentioned, taking into account that we are already slightly over time, and since we do not want to overrun the session, I will now bring the word back to the room. I believe we have Amado Espinosa in the room for a shorter round of interventions. Amado is a veteran in medical informatics and founder of Medicist, a key figure in digital health reform in Latin America. So, Amado, you have the word now.


J Amado Espinosa L: Thanks, Joao, and thank you very much for the opportunity to participate. Well, right now the main purpose of this Dynamic Coalition at the IGF is mainly to show the community how you can participate in a multi-stakeholder model in order to integrate all these new trends in technology, like AI or quantum computing, into the healthcare environment and healthcare services. Right now our focus, as Amali mentioned at the very beginning, is to approach the social predisposing factors of health, which are pretty much related to the agentic resources or tools that we already have from the different training models available on the market, where we are trying to provide society with the proper resources to manage their own well-being and their own health. The new trend right now is not only to prevent, but also to help society improve their health and become partners in this healthcare responsibility, which is one of the SDGs. I encourage everybody from the technical community to join our efforts to integrate the environmental medicine variables that are currently measured in different environments into the social determinants of health, which are also very well observed and included in these agentic models, and also those in the technical community who are deeply engaged in the neurological basis of behavior, which is already incorporated into this new computing theory in terms of how to really take advantage of AI in the healthcare arena. I thank everybody for your interest, and please join our DC. You can see all our documents already published on the IGF home page, and we will be very happy to share with you our ideas, initiatives and goals for the coming five to ten years. Thank you very much.


Joao Rocha Gomes: Thank you so much, Amado. I think I will do exactly the same, which is to add some of these recommendations of events, and the actions that the DC is taking on, to the session report, where everyone can follow up and even join if they want. I also saw earlier Yao Amevi Sossou sitting at the table with you in the room, and I will give him the word now. He is an internet governance advocate and a youth mobilizer from Benin, and a member of the We the Internet coalition.


Yao Amevi A. Sossou: Thank you very much. Thank you very much, Joao. On this topic I want to share a perspective from research I ran from last year until this year in Benin, regarding the gap between promised health solutions and the reality for people who need them. During the research I conducted in Benin, I found an actual gap between the needs of the people and the solutions that are made available to them on a daily basis. Let's take two examples. First, a young 18-year-old student who usually relies on self-medication when he is sick, or goes to a local pharmacy, because of long waiting times at the hospitals, or maybe because of a loss of trust in medical professionals; for him, seeking formal care is like a last resort. And second, a father who is a social worker with a low income, who views the medical care of his family as a big financial hurdle. His first reflex is to look for traditional medication instead of going to the formal health care system, which he finds very expensive. These are not just unique stories. They represent the daily reality of countless families that I met during my research, and for them resilience is about survival, about treatable illnesses like malaria, and about whether the cost will be bearable for them. For most of them, the cost of those treatments is almost a monthly income, a monthly salary, for the whole family.
Those are issues I encountered. I also saw that there was potential medical help available on the ground, but there was a lack of awareness about those solutions, and even the professionals on the ground are not using the solutions available. Basically, what I found is that most of the patients I interviewed during the research did not trust the solutions, and even the doctors who use the platform confirmed that none of their patients are using the app. Most of the barriers are deeply rooted in the lack of a human aspect: there is a profound lack of awareness and a deep-seated mistrust in the health care system, extending to digital tools as well. Secondly, there is an accessibility gap: most of the people I interviewed are not really proficient in French, the official language that the apps use, so low literacy is also a barrier. Thirdly, affordability remains a big challenge, a big hurdle, and this kind of app cannot help people if it does not offer the end user an alternative path to affordable health care. So we must design with the community in mind. The people we are designing for, we must design with them and co-create user-centered solutions that are culturally relevant and integrate local languages and local solutions. We must build trust alongside the digital platforms, and they need to be integrated with public health education and awareness campaigns to show their value and reliability. We must also make sure that the solutions are affordable, especially in times of crisis, so the digital health solutions made available must be very flexible in terms of payment and also offer insurance opportunities to the end users on the ground. Those are the inputs I wanted to bring to this topic. Thank you very much. Back to you, Joao.


Joao Rocha Gomes: Thank you so much, Yao, for giving us direct insights from the source on the issues that you see, and for suggesting some potential solutions, or directions even. Thank you once again. I will now give the word to Houda Chihi, online, a senior researcher in wireless and green communication from Tunisia with a PhD in telecommunications. I believe you have slides to share with us, so feel free to go ahead and share the screen.


Houda Chihi: Thank you so much, Joao. Can you hear me? Yes? Okay, thank you. Hello everyone. Thank you so much, Joao, for this great introduction. I also want to thank all the participants for attending our session, and let me share my screen. I hope it will be visible. Okay. Is it visible? Is it okay? I will make it a bit bigger. Okay, so thank you so much. My talk today will be about cyber security for environmental sustainability. Let's start with the roadmap of my talk. I will start with the context and the challenges, to understand the interest of this synergy between sustainability and cyber security. After that, I will explain the principle of this synergy, together with challenges and best practices, and I will sum up with the key points of my talk. So what are the challenges? Nowadays, due to the rise of the problem of climate change, we are obliged to exploit ICT for measuring greenhouse gas emissions. So we speak about data sovereignty and ICT for CO2 emission measurement, and we speak about new metrics for measuring energy consumption. Let's say that ICT is integrated and based on the collection of energy data. But here we have a threat: the rise of attacks on energy platforms if there is a lack of cyber security tools. Another challenge is the growing use of, we speak nowadays of the revolution of, artificial intelligence and machine learning. So we speak about misinformation and fake profiles, and we also have a great emergence of new generative artificial intelligence. So there is a cyber security threat regarding climate change: we will have different outcomes if there is a possibility of intrusion into any energy platform or any energy algorithm. Another challenge is related to energy data theft, which is a reputational risk for companies and for sustainability in general.
Okay, so let's understand a bit the cyber sustainability principle that I will explain in my talk today. It is the combination of cybersecurity with other principles related to the protection of society, the environment, and governance. We now speak about a new kind of protection of the planet and the environment which, if done efficiently, leads to sustainability; and with the introduction of cybersecurity recommendations, we obtain another principle, which is cyber sustainability. What is that principle about? It is the combination of cybersecurity practices with the minimization of the carbon footprint and of energy consumption. In this way, we speak about a balance, or trade-off, between satisfying the Sustainable Development Goals, which leads to sustainability, and cybersecurity practices. We thus have an alignment between cybersecurity and green tech: a redirection of technology towards both security, meaning data protection, privacy, and the protection of human rights, and the protection of the planet and environmental sustainability. Here we speak about a new concept based on protection and on specific policies and recommendations. We have different pillars to respect in order to protect the environment together with data protection and human rights. It is a redirection of technology for the protection of people, whether related to the planet, the environment, health care, or human rights in terms of data and information protection, with tech solutions that are green while respecting cybersecurity. In short, it is green policy redirected as tech for humans: cybersecurity for good, for both sustainability and the protection of human rights.
As simple practices for energy, we can use, for example, Internet of Things sensors for energy monitoring and security; a best practice is to monitor the functionality and operability of these sensors in a safe way. Another recommendation is to empower research labs to focus more on this intersection between sustainability and security. Here we need collaboration between different stakeholders, a mindset shift, and capacity building in both green practices and cybersecurity, which calls on developers, academia, and environmental experts. We need all of them to sit at the table, collaborate, and state specific rules directed at the benefit of the planet and human rights at the same time. So, not to waste too much time, let's go to the best practices and tips for sustainable cybersecurity. In general, it rests on three pillars: environmental responsibility, social ethics, and technological resilience. That is, technology is redirected towards green practices while respecting a specific ethics, to protect human rights and the sustainability of the planet at the same time. Let's now speak more precisely about the impact of artificial intelligence, because nowadays artificial intelligence and machine learning are integrated everywhere, whether in cybersecurity or in sustainability. But if we let them run on any kind of data, we will have bad outcomes, outcomes that threaten our lives and the planet. The use of artificial intelligence algorithms should therefore be based on datasets of a specific quality and, if we notice any threat or risk of bias, discrimination, or a bad outcome, we have to carry out specific audits and tests and add the specific data we need to reach a good outcome.
Another good practice is based on the combination of safety, sustainability, and cybersecurity, starting with simple measures such as data backups. If we face any threat or risk, we then do not lose all of our data: if we keep the necessary backups and storage, there is no loss for us and no threat to our reputation, and we do not have to repeat the data collection and waste data and costs.


Joao Rocha Gomes: Excuse me, we have to move forward.


Houda Chihi: Okay, okay. So, let's go to the facts. Another important thing is to take inspiration from major companies, regulators, and standards bodies that are dealing with the problem of cybersecurity, such as NIST, ISO, and the European acts, and to convince them to state specific rules together for the benefit of the planet.


Jorn Erbguth: Okay, thank you very much for your ideas; I appreciate them very much. Joao, please move ahead.


Joao Rocha Gomes: Thank you so much for gatekeeping the time. I always feel like a bad person every time I interrupt someone, so thank you for taking on that role; we definitely need it because our time is short. I will now briefly introduce Frederic as well. Be mindful of the time and please keep the intervention short. Frederic Cohen is a data-driven governance advocate with expertise in IGF and WSIS. So, Frederic, I'll give you the word now.


Frederic Cohen: Hello, everyone, dear friends, members, and all the participants. I would like to thank you all for the opportunity to express our consideration for this summit, and to thank all the organizers. This month is a moment of meetings and exchanges, as promoted by DESA in a newsletter, with the visit of Under-Secretary-General Li Junhua in France for the Third Ocean Conference, the Fourth International Conference on Financing for Development in Sevilla, and this IGF Summit in Norway. The topic of water is a major issue for protecting the planet, bringing health and sanitation to people, and it engages us in combating pollution everywhere. Ecology is impacted by the interaction of humanity with the environment, and the mass transfers of an industrialized population put life in danger. It is a forum of openness and inclusivity that is proposed to the international community to assist and support the decision-making that applies to the global economy. Consulting the population at a large scale is a way to examine statistics and arrive at solutions. The venue is a way to present personalities from around the world and transmit information about their will to communicate. It was expected to make progress towards recognition. This event, which will mark a date, must be the occasion to recall the work done with the MAG to advance knowledge for prevention and for the application of the guidelines set out for the Internet. Publications from UNDESA are an important reference for the discussion. It is supposed that every meeting should be funded by a major vote of the community, but also by investment from the private sector and the engagement of member states together. We must demonstrate our commitment to improving communication for global affairs. As major goals for nations, health, education and climate change are often put forward to the global market as topical investments to develop a fair economy.
It is a sector of partnership with philanthropy, where participants can be proud to offer their volunteering. A transparent regulatory framework is needed to define an ambitious achievement in this matter. The future talk will note the contribution of the Dynamic Coalition on Data-Driven Health Technologies to this global effort, and it will be of great interest to the followers of the initiative. I thank you very much.


Joao Rocha Gomes: Thank you so much, Frédéric, also for keeping to the time. I will now very quickly bridge into the discussion. I'll just share a medical perspective; I also have some slides on the topic, but I am mindful of the time. I invite everyone to leave their questions in the chat already, if you have any. I will share my screen, and I hope you can see the slides. Yeah, perfect. So just a very brief note on something we are already aware of: environmental hazards definitely lead to health emergencies. We have reports stating that both air pollution and water contamination, just as examples, can have obvious implications for health outcomes. And, surprisingly, here is a data point I thought was interesting to share: 99% of the world's population lives in places where the air quality guidelines set by the WHO are not met. This is obviously a burden; it can lead to conditions such as asthma, stroke, cardiovascular diseases, even cancer. As for water contamination, while we may think we already have good results, and we do have 73% of the population covered by safely managed drinking water access, it also means that the other 27% are not. This can lead to diseases: many types of gastrointestinal infections, but also even cancers, for example in the case of arsenic contamination of water. I wanted to bring some examples of initiatives, ongoing or recent, that focus on these aspects. One of them is BreezoMeter, a mapping initiative that reports air quality in real time; many applications that we currently use already consume the information provided by such tools, so we can see in which environments the air quality is better or worse.
Then we also have the REACH project in Bangladesh, which provided alerts to the population based on the quality and potential contamination of the water they were consuming from public wells and open systems. And then there is the SORMAS project in Nigeria and Ghana, which was established and is still running, focusing on mapping disease outbreaks and providing a real-time response system that alerts populations as well as healthcare providers. And we know, as this project showed, that vulnerability does not necessarily come with connectivity. So my one advocacy point in this intervention is for us to address internet access, digital literacy, and inclusion in system design, which are often lacking. Equity must be a design principle of these tools, not an afterthought. Therefore, I really think we should pick up some of these projects and leverage them into policy-aligned interventions, so that we replicate what was done well, but also learn from the projects that did not work so well and try to bridge the gap, or at least solve some of the issues found in many of these projects that prevent them from continuing beyond the lifespan their funding allows. Without further ado, I would like to invite questions from the audience. I know that in the room there may be people with questions; I believe you can raise your hand and a microphone will be provided to you. I will also look into the online chat to see if any questions pop up. Please state your name and affiliation for context before any intervention. Even if you don't have questions, share your ideas and comments. Thank you so much.


Jorn Erbguth: Do we have some questions in the room? Please raise your hand. Yes, there's a mic here; please come to the microphone and speak.


Audience: And please state your name and affiliation. I'm Marcelo Fornasin, from the Oswaldo Cruz Foundation, Brazil, a researcher in public health. Thank you, and congratulations on the interesting interventions we have had here. I'd like to ask you about this: when we make the links between health and the environment, we see new sources of environmental health data produced through the Internet of Things. I'd like to hear what you think about the validity of this data. How can we produce, use, and evaluate the quality of the data generated by the many devices monitoring the environment? And how can we use this data for epidemiological surveillance in healthcare scenarios?


J Amado Espinosa L: Yes, may I? Yes, that's an important question, and I think we have to double-check the validity of this data. Nowadays, as our colleague mentioned, the IoT dedicated to healthcare is, in most countries, already linked to public health policies. And what we are realizing is that it is not only about taking care of the public health problems already in place, as you mentioned, but also about how we can really help the population to improve their health. That means, if they are following a fitness program or trying to compensate for chronic diseases, environmental variables can play a role in helping them improve their status. It is very important to integrate those kinds of values and information into their personal health record and provide them with personalized recommendations and guidelines, which, through the use of AI and the agentic resources I already mentioned, are already available in certain applications. The most important step to take right now is to define these guidelines for our own environment, because a value is not the same here, in this beautiful country, as it is in Mexico, in Panama, or in another region. So our recommendation is, of course, to join the efforts of the World Health Organization in order to have this data regionalized but included in the different platforms already available in the different regions, so that we can really provide personalized recommendations. Thanks.


Jorn Erbguth: You asked about validity and, of course, this really depends on the type of data. If you have measurements of water quality from a trusted laboratory, the data is quite valid. If you have indirect measurements from IoT devices, you often have to make a lot of assumptions about causes, about quantities you cannot measure directly but can derive from direct measurements. Those are based on assumptions and, of course, they run the risk of including certain errors or even political misconceptions.


Yao Amevi A. Sossou: On that front, I would just add something regarding how we can use those IoT solutions. In terms of trust in the solution, I think we need to come up with a unified mechanism. We are working on a mechanism of policies so that the data, and the way they are collected, are handled in the same way, to build trust in those data. Depending on the region of the world, different methods are used; but all in all, coming up with unified regulations, with regulatory measures on how we collect the data, and scientifically proving the validity of those data will help build trust in the data. And then we also need to be able to replicate those methods and methodologies. Thank you.


Jorn Erbguth: Do we have comments from the online speakers on this question?


Houda Chihi: Yes, okay, thank you so much for this question. I just want to add that the source of data collection is very important, and the testing phase is very important too, because if we deploy any solution directed at healthcare, it is important to test it beforehand and judge the outcome. After that, we can commercialize it, or decline it, or just adjust it, or retrain it and collect new data. This is a collaborative effort between the technical team and the medical staff. Thank you.


Jorn Erbguth: Thank you. Are there any further comments from the experts? Otherwise, are there further questions from the room? Do we have questions online, Karel?


Joao Rocha Gomes: Yes, we do have a question online. I can read it out loud, and then I'll let you add your thoughts. The question comes from a representative of IGF Ghana, and it reads: how can nurses balance the benefits of AI-driven healthcare with the essential need for human empathy and compassionate patient care? I would extend it even beyond nurses, to healthcare professionals in general.


June Parris: Can I say a few words? It's up to the individual. As a healthcare professional, you should put your job first, but you should care about the patient, empathize, and know something about what you're talking about. Another important thing in healthcare is education: we need expert patients. The more expert a patient is, the easier the job is. Everything we've said today, the environment, healthcare, the internet, natural disasters, it all comes down to one thing: education. And healthcare professionals, in particular, need to be dedicated to the job and do it well.


Yao Amevi A. Sossou: I can add to what you said, June. Healthcare professionals in particular need to educate themselves in the use of the emerging technologies available, but in their practice they also need to build more empathy with the patients visiting them. During the research I did in Benin, I noticed that most trust issues come from the way professionals deal with the patient, and this creates a breach of trust between them and their patients. So: creating empathy with the patient, but also learning how to combine those technologies with their daily practice; then, I think, they will be quite effective. Thank you.


Houda Chihi: Yes. I want to say that any technology, whether based on AI or something else, is here to complement and help us. But empathy is always the first thing we should offer any patient, so that they accept any medical tool that will help them recover quickly. Thank you.


Jorn Erbguth: Thank you. Do we have a further question from the room? I don't see any. Maybe I will add to the last question. Studies have shown that people tend to see more empathy in AI than in human doctors. Human doctors are often stressed and under time pressure, and sometimes they don't act with the empathy we would like them to show. Of course, empathy is not just using the right words; it is a lot more, and with AI it is limited to the right words right now. When I look at doctors, I see many who have an issue with the semi-educated patients who have used Google or GPT for their problems and question the authority of the doctor without really understanding the issue. At the same time, AI can help provide further explanation to patients. When a patient gets a diagnosis and has a hard time understanding what it means, AI could be made available to answer the further questions they may have after the doctor is gone, answers which the health system currently cannot easily provide.


Joao Rocha Gomes: I will add a short 30-second point on this note, also aware of the time, which is the fact that empathy is often therapeutic. If we think about diseases or conditions that have no treatment, that aren't curable, empathy is often the most important part of care. But we can also look at it from the other perspective: when a condition is curable, should we, given our limited resources, spend time and effort empathizing with patients or focus on treatment? Obviously, the answer should be both, but that is not always possible; and healthcare outcomes, even though they depend on both, depend above all on good results and good treatments for the patients. Empathy should always be part of it. And as you said, technology is here to help us leverage that part of care, potentially even as a replacement in the future. I wouldn't say that is possible now; as you mentioned, it's just words, not actions, and people still know that there's no human behind the machine, and that still counts, even if indirectly. But I would say it's still relevant. Thank you for the time as well. Maybe we can wrap up very soon.


Jorn Erbguth: We have to wrap up. Thank you, and that was already something of a final comment from you. Thank you for your excellent moderation, Joao. Thank you to everyone who organized this session; thank you to Amali, who was unfortunately not able to come and who is the driving force behind this dynamic coalition. Thank you to all the participants here on site and online, for attending, for your interesting questions, for your interest. We know that technology, the Internet, AI, and data provide a lot of opportunities to improve healthcare, but they also come with a lot of risks that we have to tackle, and we have to see how to manage them in order to keep the risk low and the benefit high when it comes to healthcare, the environment, and improving resilience. So thanks a lot. And we will be at WSIS, so if you're interested in the topic, please join us again at WSIS in two weeks. Thank you.


J

June Parris

Speech speed

131 words per minute

Speech length

685 words

Speech time

313 seconds

Caribbean countries face economic barriers to keeping up with developed nations’ health technology

Explanation

Caribbean nations struggle to maintain pace with technological advances in healthcare due to limited financial resources. They must rely heavily on external expertise and funding from Europe and North America, which creates dependency and limits their ability to implement sustainable solutions.


Evidence

Personal experience working in UK healthcare systems versus returning to Barbados where systems were not as advanced; reliance on overseas experts and funding; natural disasters causing setbacks that require repeated investment


Major discussion point

Digital divide and resource allocation challenges in developing regions


Topics

Development | Economic


Agreed with

– Alessandro Berioni
– Henrietta Ampofo
– Yao Amevi A. Sossou

Agreed on

Digital divide creates significant barriers to healthcare access


Disagreed with

– Yao Amevi A. Sossou
– Alessandro Berioni

Disagreed on

Primary barriers to digital health adoption


Cost of internet access and maintenance creates barriers in developing regions

Explanation

The high cost of internet infrastructure, access, and ongoing maintenance presents significant obstacles for healthcare digitization in developing countries. These financial barriers prevent effective utilization of digital health technologies even when they are available.


Evidence

Basic internet access costs, maintenance of internet systems, and limited resources in island economies


Major discussion point

Digital access and affordability challenges


Topics

Development | Infrastructure


Agreed with

– Alessandro Berioni
– Henrietta Ampofo
– Yao Amevi A. Sossou

Agreed on

Digital divide creates significant barriers to healthcare access


A

Alessandro Berioni

Speech speed

161 words per minute

Speech length

1334 words

Speech time

495 seconds

Only two-thirds of global population have internet access, with 2.6 billion lacking connection

Explanation

There exists a significant global digital divide where approximately one-third of the world’s population lacks internet connectivity. This gap particularly affects rural areas and limits the potential for digital health interventions to reach those who need them most.


Evidence

Statistical data showing 2.6 billion people without internet connection, with rural areas being most affected


Major discussion point

Global digital divide and connectivity challenges


Topics

Development | Infrastructure


Agreed with

– June Parris
– Henrietta Ampofo
– Yao Amevi A. Sossou

Agreed on

Digital divide creates significant barriers to healthcare access


Disagreed with

– June Parris
– Yao Amevi A. Sossou

Disagreed on

Primary barriers to digital health adoption


Internet enables early warning systems for climate-related health threats

Explanation

Digital technologies and internet connectivity provide crucial infrastructure for monitoring and alerting populations about environmental health risks. These systems can help predict and respond to climate-related health emergencies before they become widespread.


Evidence

Examples of surveillance systems and remote care capabilities for underserved areas


Major discussion point

Technology’s role in health resilience and environmental monitoring


Topics

Development | Infrastructure


Agreed with

– Henrietta Ampofo
– Houda Chihi
– Joao Rocha Gomes

Agreed on

Internet enables environmental and health monitoring systems


Algorithms are engagement-driven rather than value-driven, creating challenges

Explanation

Current AI and algorithmic systems prioritize user engagement over beneficial health outcomes or values. This creates risks in healthcare applications where profit-driven engagement metrics may not align with patient wellbeing or equitable care delivery.


Evidence

Reference to Yuval Noah Harari’s book ‘Nexus’ discussing neural networks and algorithmic decision-making


Major discussion point

AI governance and ethical considerations in healthcare


Topics

Legal and regulatory | Human rights


Youth are at forefront of internet adoption and should lead innovation

Explanation

Young people have the highest internet penetration rates and are most connected to digital technologies. This positions them as key stakeholders who should be empowered to lead digital health innovation and policy development.


Evidence

Youth internet penetration rate of 72% compared to general population


Major discussion point

Youth engagement in digital health innovation


Topics

Development | Sociocultural


Technology enables remote care for underserved areas

Explanation

Internet and digital technologies provide opportunities to deliver healthcare services to populations that lack access to traditional healthcare infrastructure. This is particularly important for rural and resource-limited settings.


Evidence

Examples of telemedicine and remote diagnostic capabilities


Major discussion point

Digital health access and equity


Topics

Development | Infrastructure


Ethical data governance requires transparency and accountability

Explanation

Proper governance of health data requires clear frameworks that ensure transparency in how data is collected, used, and shared. Accountability mechanisms are essential to prevent misuse and protect patient rights.


Evidence

Discussion of neural networks where data processing cannot be tracked


Major discussion point

Data governance and privacy in digital health


Topics

Legal and regulatory | Human rights


Y

Yao Amevi A. Sossou

Speech speed

137 words per minute

Speech length

878 words

Speech time

384 seconds

Accessibility gaps exist due to language barriers and low literacy levels

Explanation

Digital health solutions often fail to reach intended users because they are not designed in local languages or appropriate literacy levels. This creates barriers for populations who cannot effectively use applications designed in official languages they are not proficient in.


Evidence

Research findings from Benin showing patients couldn’t use French-language health apps due to language barriers


Major discussion point

Cultural and linguistic barriers to digital health adoption


Topics

Development | Sociocultural


Agreed with

– June Parris
– Alessandro Berioni
– Henrietta Ampofo

Agreed on

Digital divide creates significant barriers to healthcare access


Patients often prefer self-medication over formal healthcare due to mistrust

Explanation

Many patients choose self-medication or traditional remedies instead of seeking formal healthcare due to lack of trust in medical professionals and systems. This mistrust extends to digital health solutions and represents a significant barrier to adoption.


Evidence

Case studies from Benin research including 18-year-old student using self-medication and families viewing formal healthcare as last resort


Major discussion point

Trust and cultural barriers in healthcare systems


Topics

Sociocultural | Human rights


Disagreed with

– June Parris
– Alessandro Berioni

Disagreed on

Primary barriers to digital health adoption


Lack of awareness about available digital health solutions

Explanation

Even when digital health tools are available, many potential users are unaware of their existence or benefits. This awareness gap prevents effective utilization of existing resources and limits the impact of digital health interventions.


Evidence

Research findings showing patients unaware of available medical help and doctors confirming patients don’t use available apps


Major discussion point

Health communication and awareness challenges


Topics

Development | Sociocultural


Need for culturally relevant and locally integrated solutions

Explanation

Digital health solutions must be designed with community input and cultural considerations to be effective. Co-creation with target populations ensures solutions are relevant, accessible, and trusted by the communities they serve.


Evidence

Recommendations based on research findings about designing with communities in mind and integrating local languages


Major discussion point

Community-centered design in digital health


Topics

Development | Sociocultural


Agreed with

– Houda Chihi
– J Amado Espinosa L

Agreed on

Need for culturally appropriate and locally integrated digital health solutions


Healthcare professionals must create empathy with patients to build trust

Explanation

Building trust between healthcare providers and patients requires genuine empathy and improved communication. This human connection is essential for patients to accept both traditional and digital health interventions.


Evidence

Research findings from Benin showing trust issues stemming from poor patient-provider interactions


Major discussion point

Human-centered care and trust building


Topics

Human rights | Sociocultural


Agreed with

– June Parris
– Houda Chihi

Agreed on

Technology should complement rather than replace human empathy in healthcare


Disagreed with

– Jorn Erbguth
– June Parris
– Houda Chihi

Disagreed on

Role of AI in healthcare empathy and patient care


Financial barriers make healthcare unaffordable for many families

Explanation

The cost of healthcare treatment often represents a significant portion of family income, making it unaffordable for many. This economic barrier forces families to seek alternative, potentially less effective treatments.


Evidence

Case study of social worker father viewing medical care as financial hurdle; treatment costs equivalent to monthly family income


Major discussion point

Healthcare affordability and economic barriers


Topics

Economic | Development


Need for affordable and flexible payment solutions in digital health

Explanation

Digital health solutions must incorporate flexible payment mechanisms and insurance opportunities to be accessible to low-income populations. Affordability during crisis periods is particularly important for ensuring continued access to care.


Evidence

Recommendations for flexible payment systems and insurance opportunities based on research findings


Major discussion point

Financial accessibility in digital health


Topics

Economic | Development


Need for unified regulatory mechanisms for data collection validity

Explanation

Establishing standardized policies and regulations for data collection across regions is essential for building trust in digital health solutions. Unified approaches ensure data validity and enable replication of successful methodologies.


Evidence

Discussion of different methods used across regions and need for scientifically proven validation


Major discussion point

Data standardization and regulatory frameworks


Topics

Legal and regulatory | Development


Need for healthcare professionals to educate themselves on emerging technologies

Explanation

Healthcare providers must continuously update their knowledge about new digital technologies to effectively integrate them into practice. This education is essential for combining technological capabilities with empathetic patient care.


Evidence

Observations from research about healthcare professionals needing to adapt to new technologies


Major discussion point

Professional development and technology adoption


Topics

Development | Sociocultural


Agreed with

– June Parris
– Houda Chihi

Agreed on

Technology should complement rather than replace human empathy in healthcare


H

Henrietta Ampofo

Speech speed

163 words per minute

Speech length

495 words

Speech time

181 seconds

Rural areas particularly affected by connectivity issues in Africa

Explanation

African regions face significant challenges with internet connectivity, particularly in rural areas. These connectivity issues limit access to digital health solutions and create barriers to implementing technology-based health interventions.


Evidence

Personal experience speaking from Dakar about connectivity challenges across different parts of Africa


Major discussion point

Infrastructure challenges in developing regions


Topics

Development | Infrastructure


Agreed with

– June Parris
– Alessandro Berioni
– Yao Amevi A. Sossou

Agreed on

Digital divide creates significant barriers to healthcare access


Malaria modeling using climate data helps predict disease patterns and allocate resources

Explanation

Digital technologies enable sophisticated modeling of disease patterns by integrating climate and environmental data. This predictive capability allows for better resource allocation and targeted interventions, particularly for vector-borne diseases like malaria.


Evidence

Example from AMNET conference on Applied Malaria Modelling sponsored by Gates Foundation; use of datasets and cloud computing for disease prediction


Major discussion point

Data-driven disease surveillance and prediction


Topics

Development | Infrastructure


Agreed with

– Alessandro Berioni
– Houda Chihi
– Joao Rocha Gomes

Agreed on

Internet enables environmental and health monitoring systems


J

Jorn Erbguth

Speech speed

114 words per minute

Speech length

1114 words

Speech time

583 seconds

Healthcare systems vulnerable to cyber attacks due to network dependence

Explanation

While internet connectivity enhances healthcare resilience, it also creates new vulnerabilities to cyber attacks. Hospitals and healthcare systems become targets for malicious actors, and network outages can immediately disrupt critical clinical decisions and operations.


Evidence

Examples of telemedicine, remote diagnostics, robot-assisted surgery during COVID; disruption to clinical decisions, supply chains, and payment systems during outages


Major discussion point

Cybersecurity risks in digital healthcare


Topics

Cybersecurity | Infrastructure


AI can optimize care but also enable government abuse in healthcare rationing

Explanation

Artificial intelligence has the potential to improve healthcare delivery and optimize treatment decisions. However, it also creates risks for governments or organizations to use AI systems to discriminate in healthcare access, quietly determining who receives expensive treatments based on potentially biased criteria.


Evidence

Example of funding disparities between breast cancer and prostate cancer research despite similar incidence and mortality rates


Major discussion point

AI governance and potential for discrimination in healthcare


Topics

Legal and regulatory | Human rights


Funding disparities exist between similar diseases like breast vs prostate cancer

Explanation

Healthcare funding allocation often reflects hidden biases rather than objective medical need. Data and AI systems can perpetuate or even amplify these inequities by making biased funding decisions appear neutral and evidence-based.


Evidence

Statistical comparison showing breast cancer and prostate cancer have similar incidence and mortality rates, but breast cancer receives nearly double the government funding and much more philanthropic funding


Major discussion point

Health equity and resource allocation bias


Topics

Economic | Human rights


J

Jason Millar

Speech speed

119 words per minute

Speech length

599 words

Speech time

301 seconds

External funding comes with constraints that may limit full problem resolution

Explanation

While external aid is crucial for Caribbean nations, funding agencies impose terms and conditions that may prevent comprehensive solutions to local problems. These constraints can limit the effectiveness of interventions and prevent addressing root causes of health and environmental issues.


Evidence

Examples of dependency on external resources from funding agencies with limiting terms and conditions; challenges from Sahara dust, sargassum seaweed, Hurricane Beryl damage to fishing fleet


Major discussion point

Aid dependency and sovereignty in health interventions


Topics

Economic | Development


H

Houda Chihi

Speech speed

119 words per minute

Speech length

1302 words

Speech time

655 seconds

IoT devices and sensors can monitor environmental factors affecting health

Explanation

Internet of Things technology provides opportunities to continuously monitor environmental conditions that impact public health. These sensors can collect data on air quality, water contamination, and other environmental hazards to support early warning systems.


Evidence

Discussion of sensors for energy monitoring and security; importance of monitoring functionality and operability of sensors


Major discussion point

Environmental monitoring through connected devices


Topics

Infrastructure | Development


Agreed with

– Alessandro Berioni
– Henrietta Ampofo
– Joao Rocha Gomes

Agreed on

Internet enables environmental and health monitoring systems


Cyber sustainability combines security practices with environmental protection

Explanation

A new approach to technology governance that integrates cybersecurity measures with environmental sustainability goals. This framework aims to protect both digital systems and the planet through aligned policies and practices.


Evidence

Definition of cyber sustainability as combination of cybersecurity with carbon footprint minimization; discussion of protection pillars for people, planet, and data


Major discussion point

Integrated approach to digital and environmental governance


Topics

Cybersecurity | Development


Importance of testing and validation before deploying healthcare solutions

Explanation

Digital health solutions require rigorous testing and outcome evaluation before implementation. This validation process should involve collaboration between technical teams and medical staff to ensure solutions are safe and effective.


Evidence

Emphasis on testing phase importance and judging outcomes before commercialization; need for collaborative effort between technical and medical teams


Major discussion point

Quality assurance in digital health deployment


Topics

Legal and regulatory | Development


Collaboration needed between technical community, academia, and environmental experts

Explanation

Addressing complex health and environmental challenges requires multi-disciplinary collaboration. Different stakeholders must work together to develop comprehensive solutions that address both technical and environmental aspects of health resilience.


Evidence

Call for collaboration between developers, academia, environmental experts to establish rules for planet and human rights protection


Major discussion point

Multi-stakeholder collaboration in health technology


Topics

Development | Sociocultural


Agreed with

– Yao Amevi A. Sossou
– J Amado Espinosa L

Agreed on

Need for culturally appropriate and locally integrated digital health solutions


AI should complement human care while maintaining empathy

Explanation

Artificial intelligence and other technologies should be designed to support rather than replace human healthcare providers. Empathy remains a crucial component of patient care that must be preserved alongside technological advancement.


Evidence

Statement that technology is here to complement and help, but empathy is always the first thing to provide to patients


Major discussion point

Human-AI collaboration in healthcare


Topics

Human rights | Sociocultural


Agreed with

– June Parris
– Yao Amevi A. Sossou

Agreed on

Technology should complement rather than replace human empathy in healthcare


Disagreed with

– Jorn Erbguth
– June Parris
– Yao Amevi A. Sossou

Disagreed on

Role of AI in healthcare empathy and patient care


J

J Amado Espinosa L

Speech speed

108 words per minute

Speech length

568 words

Speech time

313 seconds

Environmental variables should be integrated into personal health records

Explanation

Personal health records should incorporate environmental data to provide more comprehensive and personalized healthcare recommendations. This integration enables AI-powered systems to consider environmental factors when providing health guidance and treatment recommendations.


Evidence

Discussion of IoT healthcare integration with public health policies; mention of fitness programs and chronic disease management with environmental considerations


Major discussion point

Personalized medicine incorporating environmental data


Topics

Development | Legal and regulatory


Agreed with

– Yao Amevi A. Sossou
– Houda Chihi

Agreed on

Need for culturally appropriate and locally integrated digital health solutions


Social determinants of health must be integrated into agentic AI models

Explanation

AI systems used in healthcare should incorporate social determinants of health to provide more equitable and effective care. This integration helps address broader factors that influence health outcomes beyond just medical conditions.


Evidence

Discussion of agentic resources and tools for managing well-being; focus on social determinant factors and environmental medicine variables


Major discussion point

Holistic AI approaches to health and wellbeing


Topics

Legal and regulatory | Human rights


A

Amali De Silva-Mitchell

Speech speed

118 words per minute

Speech length

183 words

Speech time

93 seconds

WHO mandates health as integral part of climate change issues

Explanation

The World Health Organization has established that health considerations must be central to climate change policy and response. This mandate requires viewing health and environmental challenges as interconnected rather than separate issues.


Evidence

Reference to WHO mandate on health and climate change integration


Major discussion point

Health-climate policy integration


Topics

Legal and regulatory | Development


Need for integrated policymaking across health, environment, and technology sectors

Explanation

Effective governance requires coordinated policymaking that considers health, environmental, and technological factors together. This integrated approach should encompass the entire ecosystem of services including public safety, emergency services, healthcare providers, and citizens.


Evidence

Call for thinking globally and integrated in policy decisions; mention of ecosystem including public safety, emergency, ambulance, hospitals, doctors and citizens


Major discussion point

Integrated governance frameworks


Topics

Legal and regulatory | Development


J

Joao Rocha Gomes

Speech speed

174 words per minute

Speech length

1944 words

Speech time

668 seconds

99% of world’s population lives in areas not meeting WHO air quality guidelines

Explanation

Air pollution represents a nearly universal health threat, with the vast majority of the global population exposed to air quality that fails to meet World Health Organization standards. This widespread exposure leads to various health conditions including respiratory diseases, cardiovascular problems, and cancer.


Evidence

Statistical data showing 99% of population in areas not meeting WHO air quality guidelines; health impacts including asthma, stroke, cardiovascular diseases, and cancer


Major discussion point

Global environmental health crisis


Topics

Development | Human rights


Agreed with

– Alessandro Berioni
– Henrietta Ampofo
– Houda Chihi

Agreed on

Internet enables environmental and health monitoring systems


F

Frederic Cohen

Speech speed

123 words per minute

Speech length

374 words

Speech time

182 seconds

Transparent regulatory frameworks needed for digital health initiatives

Explanation

Effective digital health governance requires clear, transparent regulatory frameworks that can guide decision-making and ensure accountability. These frameworks should support both public and private sector engagement while protecting public interests.


Evidence

Discussion of need for transparent regulatory framework for ambitious achievement; mention of private sector investment and member state engagement


Major discussion point

Regulatory transparency in digital health governance


Topics

Legal and regulatory | Economic


A

Audience

Speech speed

120 words per minute

Speech length

117 words

Speech time

58 seconds

Need to validate data quality from IoT devices for environmental health monitoring

Explanation

There are concerns about the validity and reliability of data produced by various IoT devices used for environmental monitoring in healthcare scenarios. The question addresses how to evaluate and ensure the quality of this data for use in epidemiological surveillance and healthcare decision-making.


Evidence

Question about validity of data from IoT devices for monitoring environment and use in epidemiological surveillance


Major discussion point

Data quality and validation in environmental health monitoring


Topics

Infrastructure | Legal and regulatory


Agreements

Agreement points

Digital divide creates significant barriers to healthcare access

Speakers

– June Parris
– Alessandro Berioni
– Henrietta Ampofo
– Yao Amevi A. Sossou

Arguments

Caribbean countries face economic barriers to keeping up with developed nations’ health technology


Cost of internet access and maintenance creates barriers in developing regions


Only two-thirds of the global population have internet access, with 2.6 billion people lacking a connection


Rural areas particularly affected by connectivity issues in Africa


Accessibility gaps exist due to language barriers and low literacy levels


Summary

Multiple speakers from different regions (Caribbean, Africa, global perspective) agree that lack of internet access, high costs, and infrastructure limitations create major obstacles to implementing digital health solutions, particularly affecting developing countries and rural areas.


Topics

Development | Infrastructure


Need for culturally appropriate and locally integrated digital health solutions

Speakers

– Yao Amevi A. Sossou
– Houda Chihi
– J Amado Espinosa L

Arguments

Need for culturally relevant and locally integrated solutions


Collaboration needed between technical community, academia, and environmental experts


Environmental variables should be integrated into personal health records


Summary

Speakers agree that digital health solutions must be designed with community input, cultural considerations, and local context in mind, requiring multi-stakeholder collaboration to be effective.


Topics

Development | Sociocultural


Technology should complement rather than replace human empathy in healthcare

Speakers

– June Parris
– Yao Amevi A. Sossou
– Houda Chihi

Arguments

Healthcare professionals must create empathy with patients to build trust


Need for healthcare professionals to educate themselves on emerging technologies


AI should complement human care while maintaining empathy


Summary

There is consensus that while technology can enhance healthcare delivery, human empathy and compassionate care remain essential elements that must be preserved and integrated with technological solutions.


Topics

Human rights | Sociocultural


Internet enables environmental and health monitoring systems

Speakers

– Alessandro Berioni
– Henrietta Ampofo
– Houda Chihi
– Joao Rocha Gomes

Arguments

Internet enables early warning systems for climate-related health threats


Malaria modeling using climate data helps predict disease patterns and allocate resources


IoT devices and sensors can monitor environmental factors affecting health


99% of world’s population lives in areas not meeting WHO air quality guidelines


Summary

Speakers agree that internet-connected technologies provide crucial capabilities for monitoring environmental health threats and enabling early warning systems for disease prevention and resource allocation.


Topics

Development | Infrastructure


Similar viewpoints

Both speakers from the Caribbean region highlight the challenges of dependency on external resources and funding, which creates limitations in addressing local health and environmental problems comprehensively.

Speakers

– June Parris
– Jason Millar

Arguments

Caribbean countries face economic barriers to keeping up with developed nations’ health technology


External funding comes with constraints that may limit full problem resolution


Topics

Economic | Development


Both speakers express concern about the potential misuse of AI and algorithmic systems in healthcare, emphasizing risks of discrimination and the need for value-based rather than profit-driven approaches.

Speakers

– Jorn Erbguth
– Alessandro Berioni

Arguments

AI can optimize care but also enable government abuse in healthcare rationing


Algorithms are engagement-driven rather than value-driven, creating challenges


Topics

Legal and regulatory | Human rights


Both speakers emphasize the interconnected nature of health and environmental issues, supporting integrated approaches that consider climate factors in health planning and response.

Speakers

– Amali De Silva-Mitchell
– Henrietta Ampofo

Arguments

WHO mandates health as integral part of climate change issues


Malaria modeling using climate data helps predict disease patterns and allocate resources


Topics

Legal and regulatory | Development


Unexpected consensus

Trust and mistrust in healthcare systems extends to digital solutions

Speakers

– Yao Amevi A. Sossou
– June Parris

Arguments

Patients often prefer self-medication over formal healthcare due to mistrust


Lack of awareness about available digital health solutions


Explanation

It’s notable that speakers from different regions (West Africa and Caribbean) independently identified similar patterns of patient mistrust in formal healthcare systems, which then extends to digital health solutions. This suggests a broader global challenge in healthcare trust that transcends regional boundaries.


Topics

Sociocultural | Human rights


Data quality and validation concerns across different technological applications

Speakers

– Houda Chihi
– Yao Amevi A. Sossou
– Audience

Arguments

Importance of testing and validation before deploying healthcare solutions


Need for unified regulatory mechanisms for data collection validity


Need to validate data quality from IoT devices for environmental health monitoring


Explanation

Unexpected consensus emerged around the critical importance of data validation and quality assurance across different speakers with varying technical backgrounds, suggesting this is a universal concern regardless of specific technological focus.


Topics

Legal and regulatory | Development


Overall assessment

Summary

Strong consensus exists around key challenges including digital divide, need for culturally appropriate solutions, importance of human empathy in healthcare, and potential of internet-enabled monitoring systems. Speakers consistently emphasized equity, accessibility, and human-centered approaches.


Consensus level

High level of consensus on fundamental challenges and principles, with speakers from diverse geographic and professional backgrounds identifying similar barriers and solutions. This suggests these issues are universal concerns in digital health implementation, with implications for policy development requiring coordinated global and local approaches that prioritize equity, cultural sensitivity, and human-centered design.


Differences

Different viewpoints

Role of AI in healthcare empathy and patient care

Speakers

– Jorn Erbguth
– June Parris
– Yao Amevi A. Sossou
– Houda Chihi

Arguments

Studies have shown that people tend to see more empathy in AI than in human doctors


It’s up to the individual. As a healthcare professional, you should put your job first, but you should care about the patient and empathize


Healthcare professionals must create empathy with patients to build trust


AI should complement human care while maintaining empathy


Summary

Jorn suggests AI may actually provide better perceived empathy than stressed human doctors, while other speakers emphasize the irreplaceable importance of human empathy and the need for healthcare professionals to prioritize genuine patient connection.


Topics

Human rights | Sociocultural


Primary barriers to digital health adoption

Speakers

– June Parris
– Yao Amevi A. Sossou
– Alessandro Berioni

Arguments

Caribbean countries face economic barriers to keeping up with developed nations’ health technology


Patients often prefer self-medication over formal healthcare due to mistrust


Only two-thirds of the global population have internet access, with 2.6 billion people lacking a connection


Summary

June emphasizes economic and infrastructure barriers, Yao focuses on cultural mistrust and awareness issues, while Alessandro highlights the global digital divide as the primary barrier.


Topics

Development | Infrastructure | Sociocultural


Unexpected differences

Perception of AI empathy versus human empathy in healthcare

Speakers

– Jorn Erbguth
– June Parris
– Yao Amevi A. Sossou

Arguments

Studies have shown that people tend to see more empathy in AI than in human doctors


It’s up to the individual. As a healthcare professional, you should put your job first, but you should care about the patient and empathize


Healthcare professionals must create empathy with patients to build trust


Explanation

Unexpectedly, there was disagreement about whether AI could potentially provide better empathy than human healthcare providers. This is surprising given the general consensus that human connection is irreplaceable in healthcare.


Topics

Human rights | Sociocultural


Overall assessment

Summary

The discussion showed relatively low levels of fundamental disagreement, with most speakers sharing common goals around improving digital health access and equity. The main areas of disagreement centered on prioritization of barriers (economic vs. cultural vs. infrastructure) and the role of AI in patient care.


Disagreement level

Low to moderate disagreement level. Most disagreements were about emphasis and approach rather than fundamental goals. This suggests good potential for collaborative solutions, though different regional perspectives and professional backgrounds led to different prioritization of challenges and solutions.




Takeaways

Key takeaways

Digital health technologies offer significant opportunities for environmental health monitoring and healthcare resilience, but create new vulnerabilities including cyber attacks and potential for government abuse in healthcare rationing


A major digital divide exists globally, with only two-thirds of the population having internet access, creating barriers to equitable healthcare delivery especially in developing regions


Trust and cultural barriers significantly impact adoption of digital health solutions, with patients often preferring traditional remedies due to mistrust of formal healthcare systems


Environmental health monitoring through IoT devices and AI can enable early warning systems and better resource allocation, but data validity and standardization remain critical challenges


Healthcare inequities can be perpetuated or hidden through biased data collection and AI algorithms, as demonstrated by funding disparities between similar diseases


Multi-stakeholder collaboration is essential, requiring integration across health, environment, technology, and policy sectors with youth playing a leading role


Empathy and human connection remain crucial in healthcare delivery, even as AI and technology become more prevalent in medical practice


Cybersecurity must be integrated with sustainability principles to create ‘cyber sustainability’ that protects both data and environmental health


Resolutions and action items

Participants encouraged to join the Dynamic Coalition on Data-Driven Health Technologies and access published documents on the IGF homepage


Upcoming hackathon scheduled for July 2nd on ‘Shaping the Future of Health’ with call for participation


Global call for AI-powered social innovation ideas to be launched in late July to advance public health through technology


Next World Federation of Public Health Associations conference scheduled for September 26th in Cape Town


Follow-up session planned at WSIS in two weeks for continued discussion on the topic


Session organizers committed to sharing presentation materials and resources for static availability


Unresolved issues

How to establish unified regulatory mechanisms and standards for IoT health data collection and validation across different regions


How to balance AI-driven healthcare efficiency with essential human empathy and compassionate patient care


How to address the fundamental affordability barriers that make digital health solutions inaccessible to low-income populations


How to overcome cultural resistance and build trust in digital health technologies in traditional communities


How to ensure data sovereignty and prevent misuse of health data by governments or corporations for discriminatory purposes


How to bridge the language and literacy gaps that prevent effective use of digital health platforms


How to create sustainable funding models for digital health initiatives that don’t come with restrictive constraints


Suggested compromises

Technology should complement rather than replace human healthcare providers, maintaining the essential human element while leveraging AI capabilities


Digital health solutions should be co-created with communities to ensure cultural relevance and local language integration


Flexible payment systems and insurance opportunities should be built into digital health platforms to address affordability concerns


Regional adaptation of global health guidelines and standards to account for local environmental and cultural differences


Gradual education and capacity building approach to help healthcare professionals and communities adapt to new technologies


Balance between data collection for public health benefits and privacy protection through transparent governance frameworks


Thought provoking comments

The WHO has mandated that health matters are an integral part of climate change issues. And we need to look at that from the perspective of the whole community. So, an ecosystem of services. And this will include public safety, emergency, ambulance, hospitals, doctors and citizens and so forth.

Speaker

Amali De Silva-Mitchell


Reason

This comment established the foundational framework for the entire discussion by emphasizing the interconnected nature of health, climate, and technology systems. It moved beyond siloed thinking to advocate for integrated policymaking and governance frameworks.


Impact

This opening comment set the tone for the entire session, establishing the multi-stakeholder, ecosystem approach that subsequent speakers built upon. It provided the conceptual foundation that allowed other speakers to discuss their regional challenges within this broader integrated framework.


AI can either propose optimal care or be used by governments or industry to triage healthcare according to opaque criteria, quietly determining who receives high-cost therapies and who is excluded… these inequities will not be transparent. They will not be visible because those decisions can be made in the dark and can be hidden behind decisions that seem to be neutral.

Speaker

Jorn Erbguth


Reason

This comment introduced a critical ethical dimension by highlighting how technology can perpetuate or create new forms of discrimination while appearing neutral. The breast cancer vs. prostate cancer funding example provided concrete evidence of existing inequities that could be amplified by AI systems.


Impact

This comment shifted the discussion from purely technical benefits to ethical considerations and power dynamics. It established a critical lens that influenced how subsequent speakers addressed technology implementation, with many emphasizing the need for transparency, accountability, and equity in their presentations.


We depend very heavily on external resources, which means that any funding agency that targets us for aid will usually make us an offer, but at the same time, that also will come with terms and conditions or a set of constraining factors that will limit the actual potential to maybe fully address an issue in a way that is fully beneficial for us.

Speaker

Jason Millar


Reason

This comment revealed the complex power dynamics and dependency relationships that affect technology implementation in developing regions. It highlighted how external funding can inadvertently perpetuate problems by imposing constraints that don’t align with local needs.


Impact

This insight added a crucial geopolitical dimension to the discussion, prompting other speakers to consider not just technical solutions but also the political economy of health technology implementation. It influenced the conversation toward more nuanced discussions about local ownership and culturally appropriate solutions.


Most of the patients I interviewed during the research didn’t trust the solutions, and even the doctors who use the platform confirmed that none of their patients are using the app. Most of the barriers are deeply rooted in the lack of the human aspect of it. There is a profound lack of awareness and a deep-seated mistrust in the healthcare system, extending to digital tools as well.

Speaker

Yao Amevi A. Sossou


Reason

This comment provided crucial ground-truth evidence that challenged assumptions about technology adoption. It revealed that technical solutions alone are insufficient without addressing fundamental trust issues and human-centered design principles.


Impact

This comment significantly shifted the discussion from technology-focused solutions to human-centered approaches. It prompted other speakers to emphasize the importance of community engagement, cultural relevance, and trust-building in their subsequent interventions, fundamentally changing the conversation’s focus.


The algorithms are mostly based on engagement. So they're engagement-driven algorithms rather than value-driven algorithms. And this is a key point we need to address… we call for a renewed digital social contract to put people and planet first, before profit and engagement algorithms.

Speaker

Alessandro Berioni


Reason

This comment identified a fundamental structural problem with current technology systems – that they optimize for engagement rather than health outcomes or social good. The call for a ‘renewed digital social contract’ provided a concrete framework for addressing these issues.


Impact

This comment introduced a systems-level critique that elevated the discussion beyond individual applications to broader questions about how technology platforms are designed and governed. It influenced the conversation toward policy and governance solutions rather than just technical fixes.


Studies have shown that people tend to see more empathy in AI than in human doctors. Human doctors are often stressed under time pressure, and sometimes they don’t act with the empathy we would like them to act with.

Speaker

Jorn Erbguth


Reason

This counterintuitive observation challenged common assumptions about AI lacking human qualities. It revealed the complex reality that stressed healthcare systems may actually make human providers less empathetic than well-designed AI systems.


Impact

This comment prompted a nuanced discussion about the role of empathy in healthcare and how technology might complement rather than replace human care. It led to a more sophisticated understanding of the human-AI relationship in healthcare delivery.


Overall assessment

These key comments fundamentally shaped the discussion by moving it beyond a simple technology-benefits narrative to a complex, multi-dimensional analysis of power, equity, trust, and human-centered design. The conversation evolved from initial technical optimism through critical examination of systemic inequities, to practical insights about implementation challenges, and finally to sophisticated discussions about governance and human-AI collaboration. The most impactful comments were those that introduced evidence-based challenges to assumptions, revealed hidden power dynamics, or provided concrete examples of implementation failures. This created a more honest and actionable discussion that acknowledged both the potential and the pitfalls of leveraging internet technologies for health and environmental resilience.


Follow-up questions

How can we validate and evaluate the quality of data produced by IoT devices for environmental health monitoring?

Speaker

Marcelo Fornasin


Explanation

This is crucial for ensuring the reliability of environmental health data used in epidemiological surveillance and public health decision-making


How can we use IoT-generated environmental data to provide epidemiological surveillance in healthcare scenarios?

Speaker

Marcelo Fornasin


Explanation

Understanding the practical application of environmental IoT data in health surveillance systems is essential for effective public health responses


How can healthcare professionals balance AI-driven healthcare benefits with the need for human empathy and compassionate patient care?

Speaker

IGF Ghana representative


Explanation

This addresses the fundamental challenge of maintaining human connection in increasingly automated healthcare systems


How can we develop unified regulatory mechanisms and policies for IoT data collection across different regions?

Speaker

Yao Amevi A. Sossou


Explanation

Standardized approaches are needed to build trust in IoT-generated health and environmental data globally


How can we regionalize environmental health guidelines while maintaining global standards?

Speaker

J Amado Espinosa L


Explanation

Environmental conditions vary by region, requiring localized guidelines that still maintain scientific validity and global coherence


How can we design digital health solutions that are culturally relevant and integrate local languages?

Speaker

Yao Amevi A. Sossou


Explanation

Addressing accessibility barriers and cultural appropriateness is essential for effective adoption of digital health tools in diverse communities


How can we develop sustainable funding models for digital health initiatives in developing countries?

Speaker

June Parris and Jason Millar


Explanation

Current dependency on external funding creates unsustainable cycles, particularly when natural disasters repeatedly set back progress


How can we address the engagement-driven versus value-driven algorithms challenge in health applications?

Speaker

Alessandro Berioni


Explanation

Current algorithms prioritize engagement over health outcomes, which could lead to harmful health recommendations


How can we establish transparent and accountable AI governance frameworks for health applications?

Speaker

Alessandro Berioni and Houda Chihi


Explanation

The black-box nature of neural networks makes it difficult to track how health-related decisions are made, raising concerns about accountability


How can we replicate successful digital health projects and learn from failed ones to improve policy interventions?

Speaker

Joao Rocha Gomes


Explanation

Many digital health projects don’t continue beyond their funding period, suggesting a need to better understand success factors and sustainability models


Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

Open Forum #8 Modern Warfare Timeless Emblems

Session at a glance

Summary

This discussion from the Internet Governance Forum in Norway focused on the Digital Emblem Initiative, which aims to create a universally recognized symbol for protecting digital infrastructure during armed conflicts. The session featured Samit D’Chuna, legal adviser at the International Committee of the Red Cross (ICRC), and Chelsea Smethurst, Director for Digital Diplomacy at Microsoft, moderated by Tejas Bharadwaj from the Carnegie Endowment for International Peace.


D’Chuna explained the historical foundation of the Red Cross emblem, tracing its origins to Henri Dunant’s experience at the 1859 Battle of Solferino, which led to the first Geneva Convention in 1864. He emphasized that the physical emblem has been largely successful over 160 years, with violations making headlines precisely because they are exceptional rather than routine. The digital emblem project emerged from the recognition that modern conflicts increasingly involve cyber operations, and medical services and humanitarian organizations now depend heavily on digital infrastructure that currently lacks any form of protection identification.


The technical requirements for the digital emblem include being decentralized, allowing covert inspection without alerting adversaries, and being removable based on security assessments. Three technical approaches are being considered: protected entity flags, digital certificates, and metadata labels. Key challenges include preventing misuse, ensuring accessibility for organizations in developing countries, and achieving global standardization through the Internet Engineering Task Force.


Smethurst highlighted the importance of industry adoption, noting that the Cybersecurity Tech Accords’ 160+ member companies represent over a billion customers globally. Both speakers emphasized that success requires technical standardization, legal integration into international humanitarian law, and widespread multi-stakeholder adoption. The initiative represents a necessary adaptation of timeless humanitarian principles to the realities of modern digital warfare, requiring both technical innovation and diplomatic consensus to protect vulnerable populations who increasingly depend on digital connectivity during conflicts.


Key points

## Major Discussion Points:


– **Digital Emblem Initiative Overview**: The creation of a universally recognized digital symbol to protect digital infrastructure during armed conflicts, extending the traditional Red Cross/Red Crescent emblem concept into cyberspace. This initiative aims to identify and protect medical services and humanitarian operations that now depend heavily on digital infrastructure.


– **Technical Implementation Challenges**: Discussion of three main technical approaches (protected entity flags, digital certificates, and metadata labels) and key requirements including decentralization, covert inspection capabilities, and removability. The challenge lies in making the system secure enough to prevent misuse while simple enough for humanitarian organizations in developing countries to implement.


– **Legal Integration and Global Adoption**: The need for diplomatic efforts to integrate the digital emblem into international humanitarian law through various mechanisms (amending existing protocols, creating new protocols, or unilateral declarations) and ensuring adoption by all 196 states party to the Geneva Conventions.


– **Trust and Effectiveness Concerns**: Addressing skepticism about whether a digital emblem will be respected given current violations of physical emblems in conflicts. The speakers emphasized that the vast majority of emblem protections work invisibly and successfully, with violations being the exception that receives media attention.


– **Multi-stakeholder Collaboration**: The importance of bringing together governments, tech companies (like the 160+ members of the Cybersecurity Tech Accords), humanitarian organizations, and international bodies through forums like the Internet Engineering Task Force (IETF) to develop and implement the standard.


## Overall Purpose:


The discussion aimed to introduce and explain the Digital Emblem Initiative, which seeks to extend traditional humanitarian protections into the digital realm. The session was designed to educate participants about the project’s technical, legal, and diplomatic aspects while addressing concerns about implementation and effectiveness.


## Overall Tone:


The discussion maintained a professional, educational tone throughout, with speakers presenting complex legal and technical concepts in accessible terms. The tone was optimistic about the project’s potential while acknowledging realistic challenges. During the Q&A session, the tone became more conversational and defensive when addressing skeptical questions about the emblem’s effectiveness given current conflict violations, but remained respectful and informative. The speakers demonstrated expertise while showing openness to collaboration and feedback from the international community.


Speakers

– **Tejas Bharadwaj**: Senior Research Analyst at Carnegie Endowment for International Peace, India; Session moderator


– **Samit D’Chuna**: Legal adviser at the International Committee of the Red Cross (ICRC); Legal and policy lead for the Digital Emblem Project


– **Chelsea Smethurst**: Director for Digital Diplomacy at Microsoft; Technical lead working on the Digital Emblem Initiative


– **Audience**: Multiple audience members who asked questions during the Q&A session


**Additional speakers:**


– **Jure Bokovoy**: Finnish Green Party member (audience member who asked a question)


– **Mia Kuhlewin**: Works in the Internet Engineering Task Force on transport protocols (audience member who asked a question)


Full session report

# Digital Emblem Initiative: Extending Humanitarian Protection into Cyberspace


## Discussion Summary from the Internet Governance Forum, Norway


### Introduction and Context


This session at the Internet Governance Forum in Norway examined the Digital Emblem Initiative, a project aimed at creating digital symbols to protect humanitarian and medical infrastructure during armed conflicts. The session was moderated by **Tejas Bharadwaj**, Senior Research Analyst at Carnegie Endowment for International Peace, India, and featured **Samit D’Chuna**, Legal Adviser at the International Committee of the Red Cross (ICRC) and Legal and Policy Lead for the Digital Emblem Project, and **Chelsea Smethurst**, Director for Digital Diplomacy at Microsoft and Technical Lead working on the Digital Emblem Initiative.


The session followed a structured format with a 20-minute keynote, presentations from both speakers, a 35-minute panel discussion, and a 15-minute Q&A period that included questions from online participants.


### Historical Foundation and Legal Framework


**Samit D’Chuna** established the historical context, noting his role as “legal advisor, not a technical person” while clarifying that the project does have technical leadership. He traced the protective emblem system to Henri Dunant, a businessman who witnessed the 1859 Battle of Solferino and was moved to organize care for wounded soldiers regardless of which side they fought for. This led to the first Geneva Convention in 1864 and the creation of the Red Cross emblem.


D’Chuna explained that the emblem functions “like a stop sign” under international humanitarian law to identify protected persons and objects during armed conflicts. The system has been largely successful over 160 years, with violations making headlines precisely because they are exceptional rather than routine.


### The Digital Challenge


The speakers outlined how modern conflicts increasingly involve cyber operations targeting digital infrastructure. Medical services and humanitarian organizations now depend heavily on digital systems, yet these critical digital assets currently lack protection identification under international humanitarian law.


D’Chuna explained that the digital emblem project emerged from recognizing this gap: the need to protect digital infrastructure used by medical and humanitarian services during conflicts.


### Technical Implementation Approaches


**Chelsea Smethurst** outlined three primary technical approaches being considered:


1. **Protected Entity Flags**: Identifiers attached to website addresses, similar to physical emblems on buildings


2. **Digital Certificates**: Cryptographic verification of protected status, described as “passports for websites”


3. **Metadata Labels**: Embedded within digital files to provide protection that travels with the data


D’Chuna specified three critical technical requirements: the system must be decentralized (no central authority controls usage), support covert inspection (can be checked without alerting the protected entity), and be removable based on security analysis.


### Security and Accessibility Challenges


Smethurst identified three main technical challenges: ensuring security to prevent misuse, maintaining simplicity for developing countries to implement, and achieving standardization across different systems.


A central concern is that marking humanitarian infrastructure might actually increase exposure to malicious actors. The system must allow organizations to remove emblems if security analysis shows risks outweigh benefits.


### Legal Integration and Diplomatic Progress


The project requires integration into international humanitarian law through various mechanisms including amending existing protocols or creating new ones. The goal is adoption by all 196 states party to the Geneva Conventions.


Significant progress has been achieved: the 34th International Conference of the Red Cross and Red Crescent (held last October) adopted a consensus resolution encouraging digital emblem work. Additionally, the Cybersecurity Tech Accords, representing 150-160 technology companies globally, adopted a digital emblem pledge.


### Multi-Stakeholder Collaboration


The technical standardization process will occur through the Internet Engineering Task Force (IETF), with a working group meeting scheduled for July to develop technical standards. The Australian Red Cross will lead work with national societies to integrate the digital emblem into domestic legal systems.


The project extends beyond the Red Cross to include other protective emblems: three orange circles for dangerous forces, civil defense emblems, and Blue Shield/UNESCO cultural property symbols.


### Addressing Effectiveness Concerns


**Tejas Bharadwaj** posed a fundamental challenge: given that physical emblems are sometimes violated in current conflicts, why should anyone believe a digital emblem will be more effective?


**Samit D’Chuna** reframed this concern: “The vast majority of the time the emblem is respected… what we see in the news are violations… it’s important to remember that the vast majority of the time the emblem does in fact work.” He referenced the “Roots of Restraint” study showing that people in conflict-affected areas report international humanitarian law works effectively despite violations receiving disproportionate media attention.


D’Chuna emphasized that international humanitarian law compliance relies on “training, bilateral dialogue, and moral obligation, not just punishment,” noting that the ICRC engages in confidential dialogue with both state and non-state actors, including cyber groups.


### Audience Questions and Concerns


**Jure Bokovoy**, a Finnish Green Party member, questioned trust in the emblem system given recent violations by major Geneva Convention signatories without significant international law enforcement.


**Mia Kuhlewin**, who works in IETF on transport protocols, raised questions about the digital emblem’s relationship to broader cybersecurity protection measures, highlighting the need for clarification on whether these should be integrated or separate initiatives.


Other audience concerns included the role of platform companies in conflict narrative shaping and questions about algorithmic amplification issues, though these topics were only briefly addressed.


### Technical Philosophy


**Chelsea Smethurst** emphasized a key principle: “We’re not driving this as a cyber security initiative, rather it is how do we develop security controls to support the legal requirements.” This ensures technical solutions serve humanitarian law requirements rather than driving them.


### Implementation Timeline and Next Steps


Concrete next steps include:


– IETF working group launching in July for technical standards development


– Continued annual ICRC meetings with states for legal integration


– Australian Red Cross leading domestic integration work


– Ongoing engagement with technology companies beyond current supporters


The ICRC has also published “Eight Rules for Hackers” as part of broader engagement with digital actors.


### Conclusion


The Digital Emblem Initiative represents an attempt to adapt humanitarian principles to digital warfare realities. While the project benefits from diplomatic momentum, industry support, and technical expertise, it faces challenges including technical complexity, global accessibility requirements, and questions about symbolic protection effectiveness in contemporary conflicts.


The discussion revealed broad consensus on the need for digital humanitarian protection and the multi-stakeholder approach required, even as significant implementation challenges remain. Success will depend on building the same trust system that has made physical emblems largely successful while adapting to unique digital domain characteristics.


Session transcript

Tejas Bharadwaj: I think I’ll start again. So good morning and welcome to all the wonderful participants gathered here today. This is day one of the Internet Governance Forum from Norway. Our session today, titled Modern Warfare, Timeless Emblems, will uncover the progress as well as the prospects of the Digital Emblem Initiative, which aims to create a universally recognized symbol for protecting digital infrastructure during conflicts. We have two wonderful speakers here today to discuss this topic: Samit D’Chuna, legal adviser at the International Committee of the Red Cross, and Chelsea Smethurst, the Director for Digital Diplomacy at Microsoft. I’ll introduce myself: I’m Tejas Bharadwaj, Senior Research Analyst at Carnegie Endowment for International Peace, India, and I’ll be moderating this interesting session. A quick note for our participants on the session’s format and some housekeeping rules. The session will start with a 20-minute keynote by Samit, who will offer you the nitty-gritty of the Digital Emblem Project. This will be followed by presentations from Samit, Chelsea, and me, and then a 35-minute moderated panel discussion where Samit, Chelsea, and I will explore different aspects of the Digital Emblem Initiative, covering its concepts, the aspects of inclusivity and scale, the challenges involved in its implementation, its associated risks, and also what lies ahead. Finally, at the end, we’ll open the floor for questions for about 15 minutes. For the online participants streaming in, add your questions in the chat box. Please start, yeah.


Samit D’Chuna: Tejas, thank you so much for that wonderful introduction. Good morning, everyone. Thank you to the IGF for hosting us for this very important topic, and thank you to all of you. I know there’s some really interesting workshops going on at the same time, so thank you so much for making the time for this one. As Tejas mentioned, my name is Samit D’Chuna. I am a legal advisor at the International Committee of the Red Cross, or the ICRC. For those of you that don’t know, the International Committee of the Red Cross is the organization mandated by international law to protect and assist victims of armed conflict and other situations of violence. So through our mandate in international law, the ICRC engages in a host of different activities. We visit persons that are deprived of liberty, persons that are detained, and reunite family members that are separated in armed conflict. We contribute to the respect for and development of international humanitarian law, which is really a big part of the ICRC’s work that I’ll talk to you a bit more about today. We, of course, support the medical services in their work and, crucially, we engage confidentially and bilaterally with parties to armed conflict when such situations are taking place. So states, when they are parties to armed conflict, and also non-state parties, you know, what you might refer to as armed groups, are a key interlocutor for the ICRC. And so in that role, we’re often referred to as the guardians of international humanitarian law or the guardians of the law of war. And it’s in that position that the ICRC has a role to play in protecting and assisting victims of armed conflict and other situations of violence, which I’ll talk to you a bit more about today.
So I think the ICRC is well-positioned to say, along with a growing number of states and other stakeholders, that today, digital technologies are really shaping the contours of modern conflict. We are very much witnessing a profound shift in the environments where international humanitarian law must operate, and as a result, we do have to think about how international law must adapt to some of these profound changes. And the digital emblem project is in some ways a small project, but in others a very large one. It’s a necessary adaptation, as I hope you’ll see by the end of this workshop, to the modern nuances of armed conflict. So what is the digital emblem project? Well, if we start with the Red Cross and the Red Crescent emblems, they have of course long marked the protection of physical persons and objects. And I guess the question then is, what does that mean today, in a reality where cyber operations are a key part of armed conflict and digital infrastructure is a key part of the work undertaken by the medical services and humanitarian organizations? But I’m getting a little bit ahead of myself now because I’m already kind of talking about the physical emblem and the digital emblem and modern conflict. I want to take a step back so that everyone’s on the same page in understanding what exactly we’re talking about when we talk about an emblem. And we’ll really go to our title for this and try to understand this concept of a timeless emblem. What is a timeless emblem? And that story starts a little bit more than 160 years ago in the city where I live, Geneva, right? Geneva in Switzerland, with a Swiss businessman named Henri Dunant. He had these great business ideas and he was having a bit of an issue with one of his business projects.
And to deal with that issue, he was able to organize a meeting with the king of France. He was that influential that he was able to meet the king of France to sort of iron out some of these issues he was having with his business. The problem was that the king of France was not in France. At the time, he was actually in northern Italy with his army, fighting, with him at the lead, in something that’s now known as the Second War of Italian Independence, against the Austro-Hungarian Empire. And so Henri Dunant, he’s a businessman, he’s savvy, he’s persevering and stubborn. And he says, no problem. This is a pressing issue. I’m just going to pack my bags and I’m going to go to northern Italy and I’m going to meet the French king there. And so he makes his way to northern Italy and he arrives near a village called Solferino. He actually arrives the day after a horrific battle takes place. And if you put yourself in the shoes of, you know, a 19th century European, your image of what warfare is, is actually something quite honorable and almost beautiful in a way, right? Like you imagine sort of the honor of the armed forces and the great things that they were doing to protect the state, to protect the nation. And when Henri Dunant arrives in, you know, the aftermath of this horrific battle, well, he doesn’t actually see any honor. He doesn’t see any beauty. What he sees is carnage, right? So he sees wounded soldiers. He sees sick soldiers. He sees dead soldiers, and what’s left of the medical services of the armed forces, completely overwhelmed by the carnage and the destruction on the battlefield near Solferino. And Henri Dunant is completely moved by what he sees and he decides, forget about these business ideas. There’s no need to meet with the French King about business. There’s something more important happening right now.
And Henri Dunant goes to a nearby village called Castiglione and he mobilizes the local population in Castiglione, particularly nurses and women, and he kind of says to them, you know, there are people in need here. Some of them are French. Some of them are Italian. Some of them are Austro-Hungarian. And none of that actually matters, because when you’re wounded or when you’re sick, you’re what we now call in French hors de combat. You are outside of combat and you’re just a person in need, and these people need help. And so he mobilizes the population to go to the battlefield and provide assistance to these persons that are wounded and sick. Eventually, he does meet with the French King, and decides not to talk about his business ideas at all. Instead, he convinces the French King to release some of the doctors of the army of the Austro-Hungarian Empire that had actually been detained, so they could provide even more assistance to the wounded and the sick. So just a complete paradigm shift for Henri Dunant, one that stayed with him for the rest of his life, because he returns to Geneva and he writes a book called Un Souvenir de Solferino, A Memory of Solferino. And in his book, he talks about the suffering that he saw on the battlefield and he proposes two key paths forward. The first one is to say that in times of peace, the civilized world, as he called it, needs to set up organizations that have, as their profession, the ability to provide assistance and protection to the wounded and the sick in armed conflict. Because that is not a role that we can entrust solely to the armies of the adversaries. There has to be some sort of neutral and impartial assistance that’s provided on the battlefield and more broadly in situations of armed conflict.
And that’s sort of the precursor of the International Committee of the Red Cross, the International Federation of the Red Cross, and the 191 what we call national societies, independent Red Cross and Red Crescent organizations all over the world. So the Norwegian Red Cross or the French Red Cross or the Turkish or Syrian Red Crescent, those are all independent organizations that are components of the Red Cross and Red Crescent movement. And that was that first idea of Henri Dunant. And then the second idea, even more crucial to our discussion today, was that, you know, he was saying, if we’re going to create all of these different organizations, there has to be a way to make sure that they’re protected on the battlefield. They have to be respected on the battlefield. So we have to make sure that there’s rules in place where parties to conflict, yes, of course, protect the wounded and the sick, don’t target the wounded and the sick, but also protect the medical services and eventually humanitarian operations as well. And those are two really key words, respect and protect. And there’s a reason that that language is used. And that idea eventually led to the adoption in 1864 of the very first Geneva Convention, essentially adopted in the wake of the writing of this book by Henri Dunant, the sort of founder of the International Committee of the Red Cross. And so why do I focus on and highlight this concept of respect and protect? Well, the idea of not targeting civilians kind of already existed at the time. There was the Lieber Code. There were lots of different states that had in their military manuals, you know, that civilians should be spared in armed conflict. But that’s actually not the only thing we’re talking about. We need to make sure, because we are part of the reason that carnage is taking place.
We need to make sure that the medical services still function. So when people are wounded and when people are sick and when people are in situations of vulnerability, there is a system in place to protect them. We can’t just flout and ignore that system. So that concept of respect and protect was really essential. And now, to be able to respect and protect certain persons and objects, obviously it’s not just about identifying civilians or identifying who’s a combatant. You have to identify this kind of invisible protection. So it was very obvious, even before the adoption of that first Geneva Convention, that there had to be a way to identify those specific protections, you know, in complex environments. And that’s really what led to the adoption of what we call the distinctive emblem of the Geneva Conventions, or the Red Cross emblem, the Red Crescent emblem and eventually also the Red Crystal emblem. The purpose of the emblem is to identify a specific protection. The way I like to explain it sometimes is that it’s a bit like a stop sign, or a symbol of highway safety. Because if you have an intersection and there’s a rule that says cars have to stop at that intersection, there has to be a way to tell the driver that there is this rule. Because cars don’t just know, well, I guess now with AI, maybe cars will know, but let’s say before that, there had to be a way to tell a driver, hey, you need to stop at this intersection. And the emblem is a little bit like that. It identifies to parties to conflict that this is a specifically protected person or a specifically protected object, and they have to be respected and protected. So it’s not just a question of not targeting them. It’s a question of ensuring that they’re able to undertake their work despite the fact that a conflict is ongoing.
And where does that bring us sort of in the modern world? Well, today, cyber operations have become a reality of armed conflict. And it’s not the first time that the reality kind of changes for the emblem. I mean, when the emblem was created, it was created as an armlet for the medical services. Eventually, it was expanded to ambulances. Then it found its way onto hospital ships or the top of hospitals or on planes. In the 1970s, something called the distinctive signal was created, specific radio and light signals for ships and planes. Because as the medical and humanitarian services expanded into new spaces, there had to be a way to identify those services in new spaces. So cyber operations today are a reality of armed conflict, but perhaps more importantly than that, people depend on digital infrastructure. You know, regular humans depend on digital infrastructure. The medical services then correspondingly also depend on digital infrastructure, and so do humanitarian operations. And when I say depend on digital infrastructure, I mean, of course, there’s an incredible socioeconomic value in, you know, information and communication technologies. But I’m specifically talking here about the most vulnerable, right? People who don’t have the privilege to sit and talk in Lillestrøm, Norway, right? I mean, in the earlier part of my career, I worked directly in situations of armed conflict in the field with persons that were displaced, with persons that had suffered horrific violations of international humanitarian law. And I can tell you that a lot of the time, in a lot of the contexts that I worked in, one of the first things people would ask for was not food or shelter or a bed or even water. The first thing people would ask for was connectivity. The first thing people would want was the ability to call their family members or to have some way to tell their family members, hey, I’m okay. Or they wanted to be able to know that their family members were okay.
I mean, you can imagine, you know, a situation where suddenly a territory becomes occupied and connectivity is shut down and people outside of that territory have no way to know what’s happening to their families. Are they okay? Did they have to move? You know, are their houses destroyed? All of those things, that connection is brought together with connectivity. So connectivity has become incredibly important for people. And then it’s also become incredibly important for the medical services and humanitarian operations. And the Digital Emblem Project is not about, sort of, stopping attacks like that: as with the physical world, emblems are used and, you know, the medical services are unfortunately still killed. We have colleagues that are killed every single year, recently this year as well, several instances where our colleagues have been killed despite displaying the emblem. So, you know, it doesn’t stop intentional attacks like that. What it does is it identifies this specific protection, because if you don’t wanna stop at a stop sign, you won’t stop. But the reality is, and it’s true for the emblem as well, the vast majority of people do stop at stop signs. And the vast majority of the time, even though that’s not what we hear about, and we can talk about that a little bit more later, the vast majority of the time, the emblem does work. The problem is that in digital infrastructure, there is no way today to identify, well, what is actually protected. So the idea with the digital emblem is not to replace the physical emblem. The physical emblem exists. And as I’ve just said, it works. And we can certainly talk a little bit more about that and the nuances of that, but the physical emblem works. There is no desire to have a digital emblem that identifies what’s physical, because that already exists.
If new technologies of warfare are developed, they have to be developed in a way that they can continue to respect the physical emblem. And there’s not going to be a new emblem that’s created to cater to new technologies of warfare. That’s not at all what the project is about. Rather, the project is about accepting the fact that digital infrastructure has become a key part of our work. It’s become a key part of the work of the medical services. And so there has to be a way to identify that digital infrastructure. That doesn’t exist yet. There’s no way to identify that digital infrastructure today. So that’s really sort of the key drive for this project. Now, after significant consultations with states, with the private sector, and within the Red Cross and Red Crescent movement, in spring of last year, and with really the great help of Microsoft and Chelsea, who you’ll hear from really shortly, we brought the digital emblem project to the Internet Engineering Task Force, which surely a lot of you already know, where the work on standards on a digital emblem will begin very soon. So our working group has been established. The very first working group meeting will be in July of this year, so in a few weeks. And of course, the ICRC is actively engaged in those discussions. I’ll just say a few words on some of the technical requirements without going over time. So just a couple more minutes on what we look for in terms of, you know, what are the needs for a digital emblem? And I’ll just preface this by saying, you know, I mentioned at the beginning, I’m a legal advisor. I’m not a technical person. Don’t worry, we do have a technical lead on the project. I’m the legal and policy lead. So I’ll really talk about this in sort of non-technical terms, but surely Chelsea can develop on this a little bit. 
So really, you know, through those consultations that we’ve had with a really broad range of stakeholders, what we’ve identified is that the digital emblem should reflect as closely as possible the way the physical emblem works. And what do I mean by that? Well, first of all, the digital emblem needs to be decentralized. So with the physical emblem, all parties to conflict can use the emblem and they don’t have to ask for permission. So if a state has identified within its own structure the medical services or a medical unit or a medical transport, if it wants to, it applies the emblem to that unit or that structure or that object. And it doesn’t seek permission. And that’s also true for non-state parties to conflict. If they have medical services, they can of course also use the emblem. They don’t seek permission from anyone. There’s no centralised body that determines, yes, you can use the emblem or you can’t use the emblem. That is not the role of the ICRC. We use the emblem for our own infrastructure, but we don’t police anyone else using the emblem. And that also has to be true with the digital emblem: there can’t be a sort of centralised body that says, yes, you can use the emblem here and you can’t use it here, and we’ve determined that this is protected and this is not. That’s for parties to conflict to determine. And then after, you know, there are rules on misuse and there are obligations to suppress misuse, and potentially, you know, certain misuses of the emblem might be a war crime. So there are different structures in place if the emblem is misused. But ultimately, it is decentralised in its use. Next is something that we call covert inspection. It’s not my favourite term because it sounds a lot more complicated than what it really is.
The idea is, at least for me, the idea is that, you know, if you have, for example, a physical emblem on the roof of a hospital and you have a reconnaissance mission by an adversary, by a party to a conflict that wants to, you know, verify certain targets, and it spots the emblem on the roof of a building, then it knows that this building enjoys specific protection under international humanitarian law, so it cannot attack the building. It also cannot destroy access to that building. That’s part of that notion of respect and protect. It’s not just about not targeting that thing. It’s about making sure that that thing continues to function despite your military operations. However, the emblem doesn’t inform the other side that, oh, someone has actually looked at it, right? It doesn’t, because sometimes it might be the medical services of the armed forces. So an adversary would not want to tell, you know, the enemy, let’s say, ah, yes, we are checking on whether you have an emblem or not, because that might then alert the other side that an attack is incoming. And so basically the digital emblem needs to function the same way. It can’t tell the adversary that it’s being looked at. That’s the notion of covert inspection. It also has to be removable. So one key thing about the distinctive emblem, the physical emblem, that also has to be true with the digital emblem, is that it has to be a tool that you can place and remove based on your own security analysis of what’s useful. There are very rare, and it really is the really, really exceptional circumstances, but there are situations where the ICRC also removes, you know, doesn’t use the emblem. And that’s also true for the medical services. There are situations where, owing to security, the emblem is not used. And so I’ll just quickly wrap up and then we can explore some of this in broader discussion.
But, you know, the digital emblem project is really a multilateral process. It’s seen a lot of success so far in terms of bringing together a lot of stakeholders. At the 34th International Conference of the Red Cross and the Red Crescent that took place last October, so this takes place once every four years, sort of like the Olympics of international humanitarian law. At this international conference that brings together all the states that are party to the Geneva Conventions, a resolution was adopted by consensus (you know, imagine the geopolitical context we’re in today), a resolution encouraging the work on the digital emblem and continued work by the ICRC. So that was really helpful. A few weeks after that, the Cybersecurity Tech Accord adopted a digital emblem pledge. The Tech Accord, Chelsea will correct me if I’m wrong, is about 150 or 160 companies among the biggest tech companies in the world. So that was, you know, a really great step forward for us. And now we’re really continuing on the standardization process with the technical standardization of the emblem. And we are also working directly with states on, you know, what we call legal integration or formalization into both domestic and international humanitarian law. Of course, like the distinctive emblem, you know, this technical solution has to be created, but it also has to be integrated into international law. So that’s a big part of our work there. I’ll stop there. I hope that was a good introduction, and pass it back over to you, Tejas.


Tejas Bharadwaj: Thanks for this brilliant presentation, Samit. It was really comprehensive and also kind of answered most of the questions I was looking forward to, you know, asking. But I also have this first question for you, Samit. The Red Cross emblem, one of the most universally recognized symbols of protection, is kind of routinely ignored in today’s conflicts. Why should anyone believe that a digital emblem will fare any better? Is it simply just another idealistic gesture in a world where violations, not protections, dominate the headlines?


Samit D’Chuna: Yeah, yeah, that’s a really good question, Tejas, and I’m glad we sort of addressed that already at the beginning. It’s true, and I mentioned it earlier, you know, there are today intentional attacks against the medical services. So against hospitals, against, like I said, colleagues, members of the Red Cross and Red Crescent movement have been injured and killed, and those are part of, you know, directly targeted operations by parties to armed conflict. The distinctive emblem doesn’t make someone a good person, and violations do take place. Now, the interesting thing is that what we see in the news are violations of international humanitarian law. So when a hospital is attacked or when an ambulance is attacked, that shows up in our feed, on our social media, you know, on traditional news, and that’s a good thing. It’s a good thing that we see that, and it’s a good thing that we are irate when something like that happens. But it’s important to remember that the vast majority of the time the emblem is respected. Okay, so that is certainly the experience, you know, that’s my personal experience, that’s the experience of our colleagues, that’s the experience of the last 160 years: the emblem does in fact work the vast majority of the time. And when the emblem is not respected and it’s, you know, targeted, we hear about it, and that is a violation of international humanitarian law. This is a war crime. Directly targeting the medical services or a humanitarian operation is a war crime. And so it’s good that that’s heard about, but that should not take away from this incredible success story of the distinctive emblem, because it was able to make tangible this


Chelsea Smethurst: and 20 global providers, that’s over a billion customers and citizens around the world that could be protected by these entities. So that’s sort of what I see, at least on the Microsoft perspective, is really the next step to scale this project beyond just a couple of core companies, a couple of core non-profits and a couple of international organizations around the globe.


Tejas Bharadwaj: No, that’s interesting. So we definitely need tech companies to be involved in this. Samit, from a legal and diplomatic standpoint, you do need commitments from the governments here, right? So what is the ICRC looking to do, and how can we make this a legally binding initiative? Is there progress there?


Samit D’Chuna: Yeah, you’re absolutely right. And I think you hit the nail on the head, Chelsea, when you said sort of global adoption. That’s also true in the diplomatic world as well. I mean, one of the key things for us is making sure that the emblem, in addition to being technically robust, is something that’s adopted by all states that are party to the Geneva Conventions. So we’re talking about 196 states. That would sort of be the ideal. That’s what we’re going to work towards. Because, and I’ve already kind of hinted at it earlier, there are issues related to misuse of the emblem, to who can use the emblem, to how it’s used, that simply have to be integrated into international law, because there need to be these common understandings of what the digital emblem is and how it’s respected and what happens when it’s not respected. That system needs to be in place, and that’s going to be in place through adoption under international humanitarian law. So there are various different strategies or means of incorporation into IHL that we’ve been discussing with states. We have an annual meeting with states to sort of update them on the technical development and then also move this conversation forward on integration into international law. So one possible solution is amending the annex. I didn’t talk about this, but there are four Geneva Conventions and then three additional protocols to the conventions. The first additional protocol has a technical annex already, and that annex can be modified, and so one solution is to modify the annex. Not all states are parties, though: all states are parties to the Geneva Conventions, but not all states are parties to the additional protocol, so there need to be some sort of subsidiary means of ensuring that states that are not party to additional protocol one can then still be included in this process. But that’s one solution.
Another solution would be to have a new protocol. So the latest protocol, the third protocol to the Geneva Conventions, was adopted in 2005. It created the red crystal emblem, which is also a distinctive emblem now of the Geneva Conventions, which is why I mentioned the red cross, the red crescent and the red crystal. So another solution is to have a fourth protocol, an entirely new diplomatic process specifically on the digital emblem. And then there are also other possible solutions, sort of more ad hoc solutions, like what we call unilateral declarations or others, to ensure that states do make the digital emblem part of their international legal obligations. Then, you know, I’ve talked about international law, but it also has to be integrated into domestic systems. So the Geneva Conventions are also integrated into domestic law in all the states that are party to the Geneva Conventions, and a lot of that work is assisted by national societies. And so it was of course important from the beginning that national societies be on board with the project. The Australian Red Cross is taking the lead on working with the different national societies all over the world to be sure that they’re mobilized, so that once a technical solution is ready, this solution can also be integrated into domestic law, because that’s not an expertise that comes from Geneva or elsewhere. I mean, that’s an expertise that comes from each individual country, and that’s kind of the work of the national societies to integrate.


Tejas Bharadwaj: Right, so you need technical protocols as well as legal protocols to make this possible. Chelsea, how are you looking to embed this digital emblem into products and the digital infrastructure of countries?


Chelsea Smethurst: Yeah, so think about the digital emblem as a digital marker: instead of being painted on a hospital roof, it’s embedded in the internet infrastructure, let’s say a hospital’s network, so that we know it should be protected during armed conflict. To make this work, we really need a way, in sort of layman’s terms, to mark these systems online, right? And so there are three technical options that are currently on the table right now. One is what we call a protected entity flag. So you can have a flag on your website’s address that says: this is a protected entity or system during conflict. The second way we’re thinking about doing this as a community, not just Microsoft, right, is digital certificates. So think of these as like passports for websites. This sort of certifies a certain identification that says, hey, this is a protected entity, and provides a certain level of validity for that work. And then the third way we’re considering as a group is essentially labels that sit behind the scenes on digital files, right, that can really be quite flexible; you can apply certain parameters to these things. And so those are the three, I would say, technical solutions that are on the table to date. And then, to answer your question on the challenges around these technical solutions that we’re considering as an industry: one, you know, is it secure enough to prevent misuse? Is somebody pretending to be a protected entity or not, right? This is a very real risk, technical and legal, that we need to really consider as we think about these technical solutions being deployed. I think the second thing I’ll say in terms of challenges, and maybe this is even more important, right, is: is it simple enough for humanitarian organizations in developing countries to use?
We really need to think about the lowest common denominator in this. And if it’s going to require a ton of money and a lot of technical resources, we’re not really achieving our goal, right, of what we’re trying to move toward for the digital emblem. So I think that’s the second sort of technical slash legal and sort of civil society risk. And then I think third, and this is very true for somebody like myself, who has been very involved in sort of technical projects and policy for cyber for many years, is how do you standardize it, right, so that everyone from governments to tech companies to NGOs or non-profits can identify it, deploy it, and respect it. So those are the challenges we need to overcome: three sorts of challenges that are technical, legal, and sort of civil.
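The first of the three options Chelsea describes, a flag attached to a site's address, can be pictured with a small sketch. Everything here is hypothetical: the `diem=protected` label, the record format, and the stand-in lookup are illustrative assumptions, since the actual format is still being standardized at the IETF.

```python
# Illustrative sketch only: none of these record names or label formats are
# standardized. It shows the idea of a protected-entity flag published in a
# DNS-style TXT record, which an observer can inspect passively (the marked
# host is never contacted, echoing the "covert inspection" requirement).

# Hypothetical zone data standing in for real DNS answers.
FAKE_ZONE = {
    "hospital.example.org": ['"diem=protected; authority=example-state"'],
    "shop.example.org": ['"v=spf1 -all"'],
}

def lookup_txt(name: str) -> list[str]:
    """Stand-in for a DNS TXT query against the hypothetical zone."""
    return FAKE_ZONE.get(name, [])

def is_marked_protected(name: str) -> bool:
    """Return True if the name carries the (hypothetical) emblem label.

    The flag is decentralized and removable: the operator publishes or
    deletes the record themselves; no central body grants permission.
    """
    for record in lookup_txt(name):
        if "diem=protected" in record.strip('"'):
            return True
    return False

print(is_marked_protected("hospital.example.org"))  # True
print(is_marked_protected("shop.example.org"))      # False
```

A DNS-style flag fits the requirements Samit lists earlier: anyone who controls a name can publish or withdraw it without asking permission, and reading it does not alert the publisher.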


Tejas Bharadwaj: Right. Samit, do you have any comments on this?


Samit D’Chuna: Yeah, no, that was a good point about, you know, the lowest common denominator. You know, the interesting thing about the physical emblem is that there’s a lot of discussion among people who are very passionate about IHL, and particularly the emblem, about where the idea of having a red emblem came from. If you read the Geneva Conventions, Article 38 of the first Geneva Convention is essentially an ode to Switzerland: the emblem is basically an inversion of the Swiss flag. But there are some pretty important names that have done quite a bit of research on this and say that the reason the color red was chosen was actually because if you’re a wounded soldier or a war medic, then you always have access to the color red, and you always have access to white, because soldiers usually carried the flag of surrender, which at the time was already a white flag. So you have a white flag and you have the ability to make a red cross. And the idea was that everyone should be able to use the emblem and there shouldn’t be any sort of barriers to the creation of the emblem. If it’s something that’s too complex, or that uses colors (we’re thinking about the 1800s) that are sort of too nuanced or too hard to produce, then it wouldn’t be respected. And so that’s why there’s this bright red color. Again, there are different stories about how it came up, but that’s quite a popular one. So, yeah, I think you’ve raised some really important points that really reflect some of the thinking that was already there in the 1800s about what the emblem needs to be.


Tejas Bharadwaj: Right. So this kind of segues into an important question I wanted to ask. I mean, we don’t want the digital emblem initiative to be an initiative that’s used by only a few countries, right? We definitely want to scale it up. So what are the costs associated with its implementation, especially for developing and smaller countries? Are the ICRC and the tech companies actively working on that? Chelsea, if you want to go ahead.


Chelsea Smethurst: Yeah, so I think there are probably two primary risks that we would associate with implementation challenges and hurdles there. One is: how do you minimize the increased exposure of protected entities? So if you are marking medical and humanitarian digital infrastructure, could you inadvertently make them more exposed to malicious actors? And I think Samit sort of talked about this in the introduction. This is actually a question that I have personally grappled with on this project, working on it for the last year and a half: what do we want to achieve here, right? And I think the acknowledgement that success is actually what you don’t hear in the news, that that is the massive accomplishment in this task, is something we’re really aiming for here. And it’s a really helpful reframing, I think, of the significance of the impact this project could actually have in the digital infrastructure world. So I think that’s one. I think, two, another risk that we’ve got to think about in terms of hurdles to overcome and costs is, you know, how do you mitigate the misuse or the abuse of a digital emblem? And I think this is challenging; there are both legal and technical legitimate concerns in this domain. Ultimately, what we’re trying to do, and we’re doing this through the IETF, the Internet Engineering Task Force, is make a standard that is verifiable, revocable and auditable. And this is very true in many cybersecurity domains. These are three sort of core properties that you want a standard to have, and that can help really scale it and mitigate that misuse. So, great question to ask. I don’t know if, Samit, you have thoughts too.
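The three properties named here, verifiable, revocable and auditable, can be illustrated with a toy sketch. This is not the emblem's actual design: a shared-secret HMAC stands in for what would realistically be public-key certificates, and every name below is invented for illustration.

```python
# Toy illustration of a "verifiable, revocable, auditable" marker.
# Assumptions: a symmetric demo key (real designs under IETF discussion
# would use asymmetric certificates), an in-memory revocation list, and a
# simple audit log. Purely a sketch of the three properties, not a spec.
import hmac
import hashlib

SECRET = b"demo-key"          # toy shared secret, illustration only
REVOKED: set[str] = set()     # revocation list (the emblem is removable)
AUDIT_LOG: list[str] = []     # audit trail of verification attempts

def issue_emblem(entity: str) -> dict:
    """Issue a signed emblem token for an entity name."""
    sig = hmac.new(SECRET, entity.encode(), hashlib.sha256).hexdigest()
    return {"entity": entity, "sig": sig}

def verify_emblem(token: dict) -> bool:
    """Check a token: logged (auditable), revocation-aware, signature-checked."""
    AUDIT_LOG.append(token["entity"])                   # auditable
    if token["entity"] in REVOKED:                      # revocable
        return False
    expected = hmac.new(SECRET, token["entity"].encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, token["sig"])  # verifiable

tok = issue_emblem("field-hospital.example")
print(verify_emblem(tok))   # True
REVOKED.add("field-hospital.example")
print(verify_emblem(tok))   # False: the emblem has been withdrawn
```

The revocation step mirrors what Samit says next about removability: the operator can withdraw the marker the moment it poses more risk than benefit, while the audit log preserves a record of who checked it.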


Samit D’Chuna: I think that was a great answer. I mean, on these two sides, as you’ve mentioned. So on the question of sort of increased exposure, you know, this has been, since the beginning of the project, really a big part of our conversation with different stakeholders, including sort of cyber actors. And what we understand is that a lot of the time, cyber actors don’t know whether certain infrastructure is protected, but if they are looking for certain critical infrastructure, tools already exist today that are quite effective in finding it. So we moved forward much more quickly on this project when we understood that the risk of exposure exists, but it’s not very high, and the way to sort of mitigate it is through these technical discussions that take place at the IETF and elsewhere, to indeed make sure that that risk is minimized as much as possible. And as I mentioned, the emblem is always revocable, right? So if at any time there’s an entity that thinks that the use of the emblem poses more risks than benefits, then it can be removed. Because, as I mentioned, it doesn’t replace other cybersecurity tools. It’s not an antivirus, it’s not a firewall. It identifies something as specifically protected, and in that sense it’s a tool, and it doesn’t replace those other mechanisms. And then, on the question of misuse, I mean, this is why, of course, integration in international humanitarian law is so important. As I said, the violations that we don’t hear about, or rather the violations that don’t take place, that’s really the key for us, right? Because when we talk about something that’s been attacked, and now a sort of criminal justice process takes place after, or we hear about it in the news, or there’s this frustration, as I said, that’s really important, but that’s already a step too far, because what we want is for those attacks not to take place.
And when discussions of the distinctive emblem took place 160 years ago, there was already this discussion of, you know, what if the emblem is misused? What’s going to happen if it starts being used on tanks and on all kinds of other things that are not specifically protected? That concern was already there, but systems were put in place, like the work of the ICRC, the confidential bilateral dialogue, you know, the fact that parties to conflict have to be trained in international humanitarian law. We’ve talked about legal integration, and those things are really all quite important. Maybe one thing I would add to that is that the digital emblem requires trust. I talked about the big success with the physical emblem, the Red Cross, Crescent and Crystal. The reason the emblem works so well is the system of trust that exists. The medical services and humanitarian organizations, certain humanitarian organizations, can trust that if they use the emblem, it’s respected. The parties to armed conflict can, the vast majority of the time, see an emblem and trust that that entity, whatever they’re looking at, personnel or object, they can trust that that is, in fact, a protected entity. As new stakeholders join this process, like technology companies, technology companies can also trust that the emblem is something that works and that’s respected. That’s a big key to this project. If it’s going to be successful, it has to reflect, it has to mimic what’s happened with the physical emblem, which is that it has to be a symbol of trust.


Tejas Bharadwaj: Chelsea, any final remarks?


Chelsea Smethurst: Yeah, so I’ll say, when I think about success for the digital emblem, right, it’s not any one single milestone. I think it’s a layered approach across multiple dimensions. We’ve mentioned this a couple of times, right? So technical standardization, legal recognition, and then finally, that multi-stakeholder global adoption. And, Samit, you mentioned trust, and I think that underpins any and all things that I say in the next segment. Again, I’m focused more on the technical capabilities and implementations here from the Microsoft side. I think, one, let’s go back to technical standardization. I would say as a cohort we’ve made significant progress here already. So in July at the IETF, the Internet Engineering Task Force, they’re already going to be launching a working group that will be developing a verifiable, interoperable and secure standard across engineering standards bodies. Second, legal recognition, right? This is something where we’ve been able to work closely with the ICRC and our other non-profit and civil society partners to really understand the international legal problems and challenges that we as a company will need to actually incorporate in this. And this is not our domain expertise, right? And so making sure that we’re supporting from a technical and operations perspective, but then working to move towards that international humanitarian law piece, has been really essential and critical. And that’s not something we could have done without the ICRC. And then I think finally, and probably most importantly, is that widespread adoption and deployment of the digital emblem, right? So, you know, the last 12 months to date, it’s been a very heavily sort of core exercise, at least with Microsoft and some smaller industry players.
You mentioned the Cybersecurity Tech Accord, which is a large sort of industry body of over 160 members committed to cybersecurity norms. And so I think how we take this emblem and then move it into sort of a global norm would be a very powerful and significant next step for this work. So thank you.


Tejas Bharadwaj: I think we just have around 15 minutes, so I’ll open the floor for some questions from the audience. We also have some online questions, but if the audience here have any questions, please feel free to raise your hand. I will identify you. Can you introduce yourself?


Audience: Jure Bokovoy, Finnish Green Party. My question is mostly to Samit. We’ve talked quite largely today about the trust in the emblem and the malicious actors targeting it. How can we even have trust in the emblem in the end, when over the last three years there have been large signatories to the Geneva Conventions, basically completely disregarding its functions? I mean, Russia has bombed multiple hospitals and humanitarian infrastructure in Ukraine, and Israel has bombed 36 hospitals in Gaza, as well as other humanitarian infrastructure in camps. And there hasn’t really been much actual international law punishment towards it, outside of labels put by the ICJ and other organizations, which are not really respected by either the US or the other superpowers, to actually put out the punishment. So how can we trust in the emblem, and what is being done to, I guess, negate this double standard?


Tejas Bharadwaj: Sure. Samit, do you want to answer that?


Samit D’Chuna: Yeah, I’m happy to take that. So thank you so much for the question. I mean, I think it’s a really important question. I’ll take it a little bit broader than just the emblem, because I think what your question gets to is really more the heart of international humanitarian law. I mean, there’s this body of law, and as you suggest, there are situations where international humanitarian law is not respected, and it’s not just frustrating, I mean, it’s horrific, because people die as a result. I’ll just preface this by saying, you know, as a legal advisor of the International Committee of the Red Cross, I won’t talk about any specific ongoing conflict. We have a confidential bilateral dialogue with parties to conflict. So the states that you mentioned, the ICRC has dialogue with those states, and these are, you know, the key topics that we talk about. So I won’t talk about any specific context. But, you know, I do want to come back to something you mentioned, which is punishment, right? And what punishment assumes is that a violation has already taken place. And that for us is the key, because, of course, that is important. It is important that, you know, international criminal law functions, that there is punishment when violations take place. But it’s not the be-all and end-all of compliance. And I think that’s where a lot of us go wrong on this question, because we assume that, you know, a crime has taken place, and, yeah, unfortunately, in international law, it’s not always punished. But that doesn’t mean that that’s the only way to ensure compliance. So, for example, under international humanitarian law, there are obligations for training on all levels of the armed forces, from the individual soldier to the highest level of a commander. Those obligations teach you that it is a violation of international humanitarian law to follow a manifestly unlawful order.
If you are ordered to bomb a hospital, you cannot say "I am just following orders." That argument died over 80 years ago in international law. So if you know that you’re committing a violation of international humanitarian law, you must stop, regardless of the orders that you’ve got from above. As I mentioned, we have a confidential bilateral dialogue. There are several humanitarian organizations outside of the ICRC that also work with parties to conflict, have different modes of action, highlight violations that take place, highlight when certain infrastructure is protected and where it is, and make noise about population movements and things like that. There’s a whole set of ways to ensure compliance with international humanitarian law. Despite all of these compliance mechanisms, violations still take place. That’s true in domestic law as well: people commit crimes even though there’s an entire legal system in place. Now, under domestic law, there’s usually a singular executive body in each state that then ensures punishment for certain crimes. But that doesn’t mean that the vast majority of human beings in a country, or on the planet, respect the law because they’re afraid of going to jail. The vast majority of people respect the rules because it’s the decent thing to do. And for the minority that are indecent, yes, there are systems in place. But again, I would say, the vast majority of the time, the rules are respected. I’ll just mention one really interesting thing. There was a study done between five and ten years ago called the Roots of Restraint. It looked at what actually makes individuals respect international humanitarian law, and how people feel about the usefulness of international humanitarian law.
And the fascinating thing is that in countries significantly affected by armed conflict, countries like the Democratic Republic of the Congo and Colombia, if you polled regular people, they said that IHL was incredibly important and that it works. And if you polled Western countries, Canada, Western European countries, again, this was done five to ten years ago, so maybe the answers would be a little different today, but at the time those countries said international law doesn’t work, because all they hear about are violations, whereas it’s people on the ground who see the rules being respected the vast majority of the time. I’ll tell you a personal story. I worked in the Democratic Republic of the Congo, and one of the things I worked on is the recruitment of children, because unfortunately a lot of children are recruited into armed groups. I’ve met with commanders of those groups. There are a lot of ways that international humanitarian law works, but many of them are invisible. It’s not just a question of punishment, although that’s also very important. Thank you.


Tejas Bharadwaj: Thank you. That was a really interesting answer. So, to the lady on the right here.


Audience: Yeah, hello. My name is Mia Kuhlewin. I’m also working in the Internet Engineering Task Force on transport protocols, so I’m very well aware of the work there. And thank you for this presentation; it was very informative and comprehensive, so I really enjoyed it. And I think it’s really nice to see that in the IETF these different communities and different stakeholders come together, and we are now taking up the work. That’s a success in itself, and it’s nice to see that it’s working. I’m just curious: during the discussion you talked a lot about the risk of exposure, and we all know that the risk of cyberattacks is increasing more and more. And as you said already, just having the emblem will not protect somebody from attacks. So, are you looking at these two angles together? Are you also trying to increase the protection of these digital assets and improve how we handle cyberattacks and so on? Or do you think these are two separate things that need to be worked on separately?


Tejas Bharadwaj: Chelsea, you want to take that?


Chelsea Smethurst: Samit, do you want to answer first from the legal considerations, and then I’ll approach the cyber angle. I think that’s a really great question, by the way.


Samit D’Chuna: Yeah, no, I completely agree. So there are different aspects, right? As I mentioned, the digital emblem is one part of it. It is so essential to victims of armed conflict, of natural disasters and other situations of violence. So that’s a key aspect. And then another aspect that is kind of new for us is working with certain cyber actors that we haven’t worked with before. We look at the concept of a party to a conflict quite broadly, so you can potentially have cyber actors that are either part of the armed forces or belonging to the armed forces that might also be an interlocutor for the ICRC, but not one that we’ve traditionally had, because we’ve traditionally worked with traditional arms carriers. We are increasingly trying to work with these more non-traditional actors, or hacker groups. And last year, the ICRC published something called Eight Rules for Hackers. It got quite a bit of traction; maybe some of you have heard about it, it was covered by the BBC and elsewhere. It was basically the rules of international humanitarian law that apply to cyber actors when they are engaging in acts as part of a conflict. So yes, there’s this entire gamut of work that we’re doing in this sector, all of it towards the same goal of ultimately increasing the protection of victims of armed conflict and others.


Chelsea Smethurst: Yeah, so the way I think about the question you’ve asked, and I’m the cyber person at the table, so this is something I’ve grappled with quite a lot in the early stages of this project, is that it’s half a legal exercise and half a cyber exercise. What I mean by that is, if you look at the requirements, half of them are legal: how do we marry these technical standards to international humanitarian law? The other half are security requirements. That is the traditional cybersecurity bread-and-butter domain where we operate as practitioners. The way to think about this is: do the security requirements support what we’re ultimately trying to achieve in terms of the legal requirements, rather than driving first with the security requirements and then coming back at the end with the legal requirements? I think it’s a very important question, because we’re not driving this as a cybersecurity initiative; rather, it is about how we develop security controls to support the legal requirements that we really need to meet here. So that’s how I think about that distinction in your question.


Tejas Bharadwaj: Right. So we have about four minutes and 45 seconds, so I will take both of these questions together and let the speakers answer them. So the gentleman on the right first, quickly. Yes.


Audience: Thank you. Thank you very much. We have heard a lot about the red cross as a protective sign, but there are a few others, such as the three orange dots and the white flag, of course. Are there any particular measures that need to be taken for the different types of protective signs, or can they all be handled in the same way in the digital sphere?


Tejas Bharadwaj: Right. So lady on the left here.


Audience: Thank you very much. I would like to start with you. You mentioned the importance of confidential dialogue with states. However, during armed escalations, non-state actors, particularly platforms like Meta and X, play a significant role in shaping narratives, and their interventions can matter as much as official channels of communication. We documented this during the India-Pakistan escalation. My question is, does the ICRC engage in confidential dialogue with those companies during times of conflict? And if so, how do you ensure that their algorithmic amplification does not exacerbate a humanitarian catastrophe?


Tejas Bharadwaj: So Samit, if you can go ahead and then Chelsea can follow, I guess.


Samit D’Chuna: Sure, yeah. Thank you so much for the questions. Great question about the different emblems. When we started our work, of course, we started with the Red Cross, Crescent and Crystal, and in the interest of time I’ve kept the conversation to that. But you’re right that there are other IHL emblems, and they are also part of our work. So even though we’re leading this on the Red Cross, Crescent and Crystal, there is, as you mentioned, the three orange circles, which is the dangerous forces emblem. It marks installations which, if attacked, would release what we call dangerous forces, causing significant harm to the civilian population: nuclear electrical generating facilities, dams and dikes. So that’s one emblem. There’s another emblem for civil defense; maybe some of you have heard of the White Helmets in Syria and elsewhere. Civil defense organizations in the different conflicts you see around the world provide certain services in the event of an armed conflict; they also have a specific protection under international humanitarian law, and they have an emblem. And then there’s also what’s colloquially known as the Blue Shield emblem, or the UNESCO emblem, the cultural property emblem, which identifies cultural property and also carries a special protection under international humanitarian law. The key is that the protections are different for each of the emblems. They’re not exactly the same protection, and of course they’re not for the same thing, so we do have to think about what that means. We’ve been working quite a bit with UNESCO and an organization called Blue Shield International, and they also participate in the IETF discussion.
So they bring that into the conversation as well, which is quite key. So yes, we have thought about the different emblems. And on the question of working with tech companies: we try to have a dialogue with everyone, and when we have a dialogue, it is a confidential dialogue. We’re really happy to provide assistance, particularly in navigating international humanitarian law, which I know can become quite complex, so we do talk about that. The thing about IHL is that it doesn’t necessarily turn on whether information is true or not. We have a notion called harmful information, where certain spreading of information violates IHL, and so of course that’s part of our dialogue as well.


Tejas Bharadwaj: Chelsea, quickly, to wrap up.


Chelsea Smethurst: I’ll just say it’s been a pleasure being here and presenting at IGF with my partners Samit and Tejas. Thank you for joining us today, and I really encourage others in industry and civil society to get involved in this work. That’s really where it needs to go: to scale beyond just a couple of small companies. So, a pleasure to be here today, and thank you all for your thoughtful questions.


Tejas Bharadwaj: Yeah, thank you very much to the audience and also to the people who have tuned in online, and please feel free to approach the speakers after the session ends. Thank you so much.



Samit D’Chuna

Speech speed: 204 words per minute

Speech length: 7358 words

Speech time: 2158 seconds

Digital emblem needed to protect digital infrastructure used by medical and humanitarian services during conflicts

Explanation

Modern conflicts increasingly involve cyber operations targeting digital infrastructure that medical and humanitarian organizations depend on. A digital emblem is necessary to identify and protect this critical digital infrastructure, similar to how physical emblems protect hospitals and medical facilities.


Evidence

People in conflict zones often ask for connectivity first to contact family members; medical services and humanitarian operations now depend heavily on digital infrastructure for their work


Major discussion point

Digital Emblem Initiative Overview and Purpose


Topics

Cybersecurity | Human rights | Legal and regulatory


Digital emblem should mirror physical emblem functionality while adapting to cyber warfare realities

Explanation

The digital emblem project aims to replicate the successful protection mechanisms of physical emblems in the digital realm. It should maintain the same principles of identification and protection while addressing the unique challenges of cyber operations and digital infrastructure.


Evidence

Physical emblem has worked for 160 years by identifying specifically protected persons and objects; cyber operations are now a reality of armed conflict


Major discussion point

Digital Emblem Initiative Overview and Purpose


Topics

Cybersecurity | Legal and regulatory | Infrastructure


Agreed with

– Chelsea Smethurst

Agreed on

Technical solutions must support legal requirements rather than drive them


Digital emblem requires decentralized use, covert inspection capability, and removability features

Explanation

The digital emblem must function like the physical emblem by allowing parties to conflict to use it without seeking permission from a central authority. It must also allow verification without alerting the entity being inspected and be removable based on security analysis.


Evidence

Physical emblem can be used by any party to conflict without permission; reconnaissance missions can spot emblems without alerting the protected entity; emblems can be removed in exceptional security circumstances


Major discussion point

Digital Emblem Initiative Overview and Purpose


Topics

Cybersecurity | Infrastructure | Legal and regulatory


Agreed with

– Chelsea Smethurst

Agreed on

Risk mitigation through technical design and legal frameworks


Physical Red Cross emblem has 160-year history of success in protecting medical services during conflicts

Explanation

The distinctive emblem system has been largely successful over more than a century and a half in protecting medical personnel and facilities during armed conflicts. While violations occur and make headlines, the vast majority of the time the emblem is respected and works as intended.


Evidence

ICRC colleagues and medical services are protected most of the time; violations that make news represent the minority of cases; emblem has evolved from armbands to ambulances, hospitals, ships, and planes


Major discussion point

Historical Context and Legal Foundation


Topics

Human rights | Legal and regulatory


Henri Dunant’s experience at Solferino battle led to creation of Geneva Conventions and distinctive emblem system

Explanation

The modern humanitarian protection system originated from Henri Dunant’s witness to the carnage at the Battle of Solferino in 1859. His subsequent book proposed creating neutral organizations to assist the wounded and establishing rules to protect medical services, leading to the first Geneva Convention in 1864.


Evidence

Dunant mobilized local population in Castiglione to help wounded soldiers regardless of nationality; convinced French King to release Austro-Hungarian doctors; wrote ‘Un Souvenir de Solferino’ proposing humanitarian organizations and protection rules


Major discussion point

Historical Context and Legal Foundation


Topics

Human rights | Legal and regulatory


Emblem works like a stop sign to identify specifically protected persons and objects under international humanitarian law

Explanation

The emblem serves as a clear visual indicator that communicates specific legal protections to parties in conflict, similar to how traffic signs communicate rules to drivers. It identifies not just civilian status but special protection requiring respect and continued functioning of services.


Evidence

Stop sign analogy – cars need to know where to stop at intersections; emblem identifies ‘respect and protect’ obligations, not just ‘do not target’


Major discussion point

Historical Context and Legal Foundation


Topics

Legal and regulatory | Human rights


Digital emblem requires integration into international humanitarian law through various mechanisms including protocol amendments

Explanation

For the digital emblem to be legally binding and universally recognized, it must be formally incorporated into international humanitarian law. This can be achieved through amending existing protocols, creating new protocols, or other legal mechanisms to ensure common understanding and obligations.


Evidence

Technical annex of Additional Protocol I can be modified; new fourth protocol could be created like the 2005 protocol that established Red Crystal emblem; unilateral declarations are another option


Major discussion point

Legal Integration and Diplomatic Progress


Topics

Legal and regulatory | Human rights


Agreed with

– Chelsea Smethurst

Agreed on

Multi-stakeholder approach essential for digital emblem success


34th International Conference of Red Cross adopted consensus resolution encouraging digital emblem work

Explanation

Despite the current challenging geopolitical context, all states party to the Geneva Conventions reached consensus in supporting continued work on the digital emblem initiative. This represents significant diplomatic progress and international backing for the project.


Evidence

Conference takes place every four years like ‘Olympics of international humanitarian law’; resolution adopted by consensus among all Geneva Convention signatory states in October


Major discussion point

Legal Integration and Diplomatic Progress


Topics

Legal and regulatory | Human rights


Need for adoption by all 196 Geneva Convention signatory states for universal recognition

Explanation

The digital emblem’s effectiveness depends on universal adoption and recognition by all countries that are parties to the Geneva Conventions. This ensures consistent understanding and application of the emblem’s protections across all potential conflict situations.


Evidence

196 states are party to Geneva Conventions; common understanding needed for misuse prevention and proper application


Major discussion point

Legal Integration and Diplomatic Progress


Topics

Legal and regulatory | Human rights


Agreed with

– Chelsea Smethurst
– Tejas Bharadwaj

Agreed on

Digital emblem must balance security with accessibility for global implementation


Physical emblem works because vast majority of time it is respected, violations make headlines but represent minority of cases

Explanation

While attacks on medical facilities and humanitarian workers receive significant media attention, these violations represent a small fraction of interactions with the emblem. The overwhelming majority of the time, parties to conflict respect the emblem and the protections it represents.


Evidence

Personal experience working in conflict zones; ICRC colleagues’ experiences over 160 years; violations make news precisely because they are exceptional


Major discussion point

Trust and Compliance in International Humanitarian Law


Topics

Human rights | Legal and regulatory


Disagreed with

– Tejas Bharadwaj
– Audience

Disagreed on

Effectiveness of emblem systems given current violations


International humanitarian law compliance relies on training, bilateral dialogue, and moral obligation, not just punishment

Explanation

Effective compliance with international humanitarian law comes from multiple mechanisms including mandatory training of armed forces, confidential dialogue with parties to conflict, and the moral imperative to follow rules. Post-violation punishment is important but not the primary compliance mechanism.


Evidence

Obligations for training at all levels of armed forces; soldiers must refuse manifestly unlawful orders like bombing hospitals; confidential bilateral dialogue with state and non-state actors; ‘just following orders’ defense rejected in international law


Major discussion point

Trust and Compliance in International Humanitarian Law


Topics

Legal and regulatory | Human rights


Disagreed with

– Audience

Disagreed on

Primary mechanisms for ensuring compliance with international humanitarian law


Digital emblem success depends on building same system of trust that exists with physical emblem

Explanation

The digital emblem can only be effective if it replicates the trust relationships that make the physical emblem successful. Medical services, humanitarian organizations, parties to conflict, and technology companies must all trust that the system works and is respected.


Evidence

Physical emblem success based on mutual trust between medical services, humanitarian organizations, and parties to conflict; new stakeholders like tech companies must also trust the system


Major discussion point

Trust and Compliance in International Humanitarian Law


Topics

Legal and regulatory | Human rights | Cybersecurity


People in conflict-affected areas report international humanitarian law works effectively despite violations receiving media attention

Explanation

Research shows that populations directly affected by armed conflict have more positive views of international humanitarian law’s effectiveness compared to people in Western countries who primarily hear about violations through media coverage. Those experiencing conflict firsthand see the law working most of the time.


Evidence

Study called ‘Roots of Restraint’ conducted 5-10 years ago; people in DRC, Colombia and other conflict-affected countries said IHL was important and works; Western countries more skeptical because they only hear about violations


Major discussion point

Trust and Compliance in International Humanitarian Law


Topics

Human rights | Legal and regulatory


ICRC engages in confidential bilateral dialogue with both state and non-state actors including cyber groups

Explanation

The ICRC’s mandate includes engaging with all parties to conflict, which now extends to cyber actors and hacker groups that may be part of or affiliated with armed forces. This includes providing guidance on how international humanitarian law applies to cyber operations.


Evidence

ICRC published ‘Eight Rules for Hackers’ that received significant media attention; dialogue extends to non-traditional actors like hacker groups; confidential bilateral dialogue is core ICRC mandate


Major discussion point

Multi-Stakeholder Engagement and Scaling


Topics

Cybersecurity | Legal and regulatory | Human rights


Digital emblem work extends beyond Red Cross to include other IHL emblems like dangerous forces and cultural property symbols

Explanation

The digital emblem initiative encompasses not just the Red Cross, Red Crescent, and Red Crystal emblems, but also other protective symbols under international humanitarian law including those for dangerous forces facilities and cultural property. Each emblem provides different types of protection.


Evidence

Three orange circles emblem for nuclear facilities, dams, and dikes; civil defense emblem for organizations like White Helmets; Blue Shield/UNESCO emblem for cultural property; different protections for each emblem type


Major discussion point

Broader Emblem System Integration


Topics

Legal and regulatory | Human rights | Infrastructure


Different emblems provide different types of protection under international humanitarian law requiring tailored approaches

Explanation

Each protective emblem under international humanitarian law serves a distinct purpose and provides specific protections that are not identical to others. The digital emblem system must account for these differences and provide appropriate technical solutions for each type of protection.


Evidence

Red Cross protects medical services; dangerous forces emblem protects facilities that could harm civilians if attacked; cultural property emblem protects heritage sites; protections are not the same for each emblem


Major discussion point

Broader Emblem System Integration


Topics

Legal and regulatory | Human rights | Sociocultural


Collaboration with UNESCO and Blue Shield International brings cultural property protection into digital sphere

Explanation

The digital emblem project includes partnerships with organizations responsible for protecting cultural property during conflicts. UNESCO and Blue Shield International participate in technical discussions to ensure cultural heritage sites and digital cultural assets receive appropriate protection.


Evidence

UNESCO and Blue Shield International participate in IETF discussions; cultural property emblem also needs digital protection


Major discussion point

Broader Emblem System Integration


Topics

Legal and regulatory | Sociocultural | Infrastructure



Chelsea Smethurst

Speech speed: 200 words per minute

Speech length: 1538 words

Speech time: 459 seconds

Three technical implementation options: protected entity flags, digital certificates, and metadata labels

Explanation

The technical community is considering three main approaches for implementing the digital emblem: website address flags that identify protected entities, digital certificates that act like passports for websites, and metadata labels that can be applied to digital files with flexible parameters.


Evidence

Protected entity flags on website addresses; digital certificates as website passports; metadata labels for digital files with flexible parameters


Major discussion point

Digital Emblem Initiative Overview and Purpose


Topics

Infrastructure | Cybersecurity | Legal and regulatory
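
To make the third option above concrete, here is a purely illustrative sketch of what a signed metadata label could look like. This is not the format under discussion in the IETF; every field name is hypothetical, and a real scheme would use public-key certificates rather than a shared HMAC secret. It only shows the general idea that a label can be verified for authenticity and rejected if tampered with.

```python
import hashlib
import hmac
import json

# Hypothetical shared signing key; a real deployment would rely on
# public-key certificates, not a symmetric secret.
SECRET = b"demo-key"

def make_label(entity: str, emblem: str = "red-cross") -> dict:
    """Build a hypothetical emblem label: a small body plus a signature."""
    body = {"entity": entity, "emblem": emblem, "version": 1}
    payload = json.dumps(body, sort_keys=True).encode()
    sig = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return {"body": body, "sig": sig}

def verify_label(label: dict) -> bool:
    """Recompute the signature over the body and compare in constant time."""
    payload = json.dumps(label["body"], sort_keys=True).encode()
    expected = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, label["sig"])

label = make_label("example-hospital.org")
assert verify_label(label)          # untouched label verifies
label["body"]["entity"] = "spoof"   # tampering breaks the signature
assert not verify_label(label)
```

The same verify-before-trust pattern would apply to the other two options (protected entity flags and certificates); only the carrier of the signed assertion changes.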


Digital emblem must be secure enough to prevent misuse while simple enough for developing countries to implement

Explanation

The technical solution faces a dual challenge of providing sufficient security to prevent bad actors from falsely claiming protection while remaining accessible and affordable for humanitarian organizations in resource-constrained environments. The solution must work for the lowest common denominator.


Evidence

Need to prevent entities from pretending to be protected; must not require significant money and technical resources; focus on lowest common denominator for global accessibility


Major discussion point

Technical Implementation Challenges


Topics

Development | Cybersecurity | Infrastructure


Agreed with

– Samit D’Chuna
– Tejas Bharadwaj

Agreed on

Digital emblem must balance security with accessibility for global implementation


Need for verifiable, revocable, and auditable technical standards through IETF working group

Explanation

The digital emblem standard must incorporate three core cybersecurity principles: the ability to verify authenticity, revoke access when needed, and audit usage. These capabilities are essential for preventing misuse and ensuring the system’s integrity.


Evidence

Three core competencies needed in cybersecurity domains; IETF working group developing standards; helps scale and mitigate misuse


Major discussion point

Technical Implementation Challenges


Topics

Cybersecurity | Infrastructure | Legal and regulatory


Agreed with

– Samit D’Chuna

Agreed on

Risk mitigation through technical design and legal frameworks
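
The verifiable / revocable / auditable triad can be illustrated with a minimal, hypothetical sketch (none of these names or structures come from the IETF work): a verifier consults a revocation list before trusting an emblem and records every check for later audit.

```python
from datetime import datetime, timezone

# Hypothetical set of revoked label IDs; a real deployment would
# distribute revocations through the standardized mechanism.
REVOKED = {"label-0042"}

AUDIT_LOG = []  # (timestamp, label_id, trusted) tuples

def check_emblem(label_id: str) -> bool:
    """Return True if the emblem label is still trusted, logging the check."""
    trusted = label_id not in REVOKED
    AUDIT_LOG.append((datetime.now(timezone.utc).isoformat(), label_id, trusted))
    return trusted

assert check_emblem("label-0001")      # unrevoked label is trusted
assert not check_emblem("label-0042")  # revoked label is rejected
```

The audit log is what would let operators detect misuse after the fact, which is why revocability and auditability are paired in the requirements.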


Risk of increased exposure to malicious actors must be balanced against protection benefits

Explanation

Marking medical and humanitarian digital infrastructure with emblems could potentially make them more visible to malicious actors seeking to cause harm. However, this risk must be weighed against the protection benefits, and technical solutions should minimize exposure while maximizing protection.


Evidence

Marking infrastructure could increase exposure; tools already exist for finding critical infrastructure; risk exists but is not very high; emblem is always revocable


Major discussion point

Technical Implementation Challenges


Topics

Cybersecurity | Human rights


Agreed with

– Samit D’Chuna

Agreed on

Risk mitigation through technical design and legal frameworks


Technical solutions must support legal requirements rather than drive them

Explanation

The digital emblem project should be guided primarily by legal and humanitarian requirements, with technical solutions designed to support these goals rather than letting technical capabilities determine the legal framework. Security controls should enable legal compliance rather than dictate legal terms.


Evidence

Half legal exercises, half cyber exercises; security requirements should support legal requirements; not driving as cybersecurity initiative but supporting legal requirements


Major discussion point

Technical Implementation Challenges


Topics

Legal and regulatory | Cybersecurity


Agreed with

– Samit D’Chuna

Agreed on

Technical solutions must support legal requirements rather than drive them


Disagreed with

– Audience

Disagreed on

Relationship between digital emblem and cybersecurity measures


Cybersecurity Tech Accords with 160+ companies adopted digital emblem pledge for global industry support

Explanation

A major industry coalition representing over 160 of the world’s largest technology companies has formally committed to supporting the digital emblem initiative through a pledge. This represents significant private sector backing and potential for widespread implementation across the tech industry.


Evidence

Tech Accords includes 150-160 companies among biggest tech companies globally; pledge adopted few weeks after International Conference resolution


Major discussion point

Legal Integration and Diplomatic Progress


Topics

Economic | Cybersecurity | Legal and regulatory


Agreed with

– Samit D’Chuna

Agreed on

Multi-stakeholder approach essential for digital emblem success


Success requires technical standardization, legal recognition, and widespread global adoption across multiple dimensions

Explanation

The digital emblem initiative’s success cannot be measured by a single milestone but requires progress across three interconnected areas: developing robust technical standards, achieving legal recognition in international law, and securing widespread adoption by multiple stakeholder groups globally.


Evidence

IETF working group launching in July for technical standards; working with ICRC on international legal problems; need to move beyond core companies to global norm


Major discussion point

Multi-Stakeholder Engagement and Scaling


Topics

Legal and regulatory | Infrastructure | Economic


Agreed with

– Samit D’Chuna

Agreed on

Multi-stakeholder approach essential for digital emblem success


Project needs to scale beyond core companies to achieve global norm status

Explanation

While the digital emblem has gained support from major technology companies, its ultimate success depends on expanding participation beyond the initial core group to become a widely accepted global norm across the entire technology industry and international community.


Evidence

Last 12 months heavily focused on core exercise with Microsoft and smaller industry players; need to scale to global norm through Cybersecurity Tech Accord’s 160+ members


Major discussion point

Multi-Stakeholder Engagement and Scaling


Topics

Economic | Cybersecurity | Legal and regulatory


T

Tejas Bharadwaj

Speech speed

171 words per minute

Speech length

720 words

Speech time

251 seconds

Physical emblem routinely ignored in today’s conflict raises questions about digital emblem’s potential effectiveness

Explanation

Given that the Red Cross emblem, one of the most universally recognized symbols of protection, is frequently violated in contemporary conflicts, there are legitimate concerns about whether a digital emblem will be any more effective. This challenges the assumption that creating a digital version will solve protection problems.


Evidence

Red Cross emblem violations dominate headlines; questioning if digital emblem is just idealistic gesture


Major discussion point

Skepticism About Effectiveness


Topics

Human rights | Legal and regulatory | Cybersecurity


Disagreed with

– Samit D’Chuna
– Audience

Disagreed on

Effectiveness of emblem systems given current violations


Implementation costs and complexity must be minimized for developing countries and smaller organizations

Explanation

The digital emblem initiative must address the financial and technical barriers that could prevent developing countries and smaller humanitarian organizations from implementing the system. Cost and complexity considerations are crucial for ensuring universal accessibility and adoption.


Evidence

Concerns about costs for developing and smaller countries; need for ICRC and tech companies to work on accessibility


Major discussion point

Multi-Stakeholder Engagement and Scaling


Topics

Development | Economic | Infrastructure


Agreed with

– Samit D’Chuna
– Chelsea Smethurst

Agreed on

Digital emblem must balance security with accessibility for global implementation


A

Audience

Speech speed

144 words per minute

Speech length

526 words

Speech time

217 seconds

Major Geneva Convention signatories have targeted hospitals and humanitarian infrastructure without meaningful punishment

Explanation

Recent conflicts have seen large signatory states to the Geneva Conventions deliberately attacking hospitals and humanitarian facilities, with international legal institutions unable to enforce meaningful consequences. This undermines confidence in the entire emblem system and international humanitarian law framework.


Evidence

Russia bombed multiple hospitals in Ukraine; Israel bombed 36 hospitals in Gaza; ICJ and other organizations issue labels but superpowers don’t respect punishment mechanisms


Major discussion point

Skepticism About Effectiveness


Topics

Human rights | Legal and regulatory


Disagreed with

– Samit D’Chuna
– Tejas Bharadwaj

Disagreed on

Effectiveness of emblem systems given current violations


Double standards in international law enforcement undermine trust in emblem system

Explanation

The inconsistent application and enforcement of international humanitarian law, particularly when major powers are involved, creates a credibility problem for protective symbols like emblems. Without consistent enforcement, the legal framework loses its deterrent effect and moral authority.


Evidence

Large signatories disregarding Geneva Convention functions; lack of actual punishment outside of labels from international organizations


Major discussion point

Skepticism About Effectiveness


Topics

Legal and regulatory | Human rights


Disagreed with

– Samit D’Chuna

Disagreed on

Primary mechanisms for ensuring compliance with international humanitarian law


Digital emblem relationship to broader cybersecurity protection measures needs clarification

Explanation

There is uncertainty about how the digital emblem initiative relates to existing cybersecurity measures and whether it should be developed as part of a comprehensive cyber protection strategy or as a separate legal instrument. The relationship between identification and actual security protection requires clarification.


Evidence

Risk of cyberattacks increasing; emblem alone won't protect from attacks; question whether these are two separate things or should be worked on together


Major discussion point

Technical and Security Considerations


Topics

Cybersecurity | Infrastructure


Disagreed with

– Chelsea Smethurst

Disagreed on

Relationship between digital emblem and cybersecurity measures


Platform companies’ role in conflict narrative shaping requires engagement on algorithmic amplification issues

Explanation

Social media platforms like Meta and X play significant roles in shaping conflict narratives and information flow during armed conflicts. Their algorithmic systems can amplify or suppress information in ways that may exacerbate humanitarian crises, requiring specific engagement and dialogue.


Evidence

Platforms play significant role in shaping narratives during armed escalations; documented during India-Pakistan escalation; algorithmic amplification can exacerbate humanitarian catastrophe


Major discussion point

Technical and Security Considerations


Topics

Sociocultural | Human rights | Legal and regulatory


Integration with existing internet protocols and standards presents both opportunities and challenges

Explanation

The digital emblem must work within the existing internet infrastructure and standards framework, which creates both opportunities for widespread adoption and technical challenges for implementation. The IETF process represents progress but also highlights the complexity of integrating humanitarian law with technical standards.


Evidence

IETF transport protocols work; different communities and stakeholders coming together; success in itself that IETF is taking up the work


Major discussion point

Technical and Security Considerations


Topics

Infrastructure | Legal and regulatory | Cybersecurity


Agreements

Agreement points

Digital emblem must balance security with accessibility for global implementation

Speakers

– Samit D’Chuna
– Chelsea Smethurst
– Tejas Bharadwaj

Arguments

Digital emblem must be secure enough to prevent misuse while simple enough for developing countries to implement


Need for adoption by all 196 Geneva Convention signatory states for universal recognition


Implementation costs and complexity must be minimized for developing countries and smaller organizations


Summary

All speakers agree that the digital emblem must be technically robust enough to prevent misuse while remaining accessible and affordable for humanitarian organizations in resource-constrained environments globally


Topics

Development | Cybersecurity | Infrastructure


Multi-stakeholder approach essential for digital emblem success

Speakers

– Samit D’Chuna
– Chelsea Smethurst

Arguments

Digital emblem requires integration into international humanitarian law through various mechanisms including protocol amendments


Success requires technical standardization, legal recognition, and widespread global adoption across multiple dimensions


Cybersecurity Tech Accords with 160+ companies adopted digital emblem pledge for global industry support


Summary

Both speakers emphasize that success requires coordinated efforts across legal, technical, and industry domains with broad international participation


Topics

Legal and regulatory | Economic | Cybersecurity


Technical solutions must support legal requirements rather than drive them

Speakers

– Samit D’Chuna
– Chelsea Smethurst

Arguments

Digital emblem should mirror physical emblem functionality while adapting to cyber warfare realities


Technical solutions must support legal requirements rather than drive them


Summary

Both speakers agree that the project should be guided primarily by legal and humanitarian requirements, with technical solutions designed to support these goals


Topics

Legal and regulatory | Cybersecurity


Risk mitigation through technical design and legal frameworks

Speakers

– Samit D’Chuna
– Chelsea Smethurst

Arguments

Risk of increased exposure to malicious actors must be balanced against protection benefits


Need for verifiable, revocable, and auditable technical standards through IETF working group


Digital emblem requires decentralized use, covert inspection capability, and removability features


Summary

Both speakers acknowledge risks exist but can be mitigated through careful technical design that incorporates security principles and maintains flexibility for users


Topics

Cybersecurity | Infrastructure | Legal and regulatory


Similar viewpoints

Both speakers emphasize that trust is fundamental to the emblem system’s effectiveness and that widespread adoption is necessary to replicate the success of the physical emblem

Speakers

– Samit D’Chuna
– Chelsea Smethurst

Arguments

Digital emblem success depends on building same system of trust that exists with physical emblem


Project needs to scale beyond core companies to achieve global norm status


Topics

Legal and regulatory | Human rights | Economic


Both express skepticism about the effectiveness of emblem systems given current violations and enforcement challenges in international humanitarian law

Speakers

– Tejas Bharadwaj
– Audience

Arguments

Physical emblem routinely ignored in today’s conflict raises questions about digital emblem’s potential effectiveness


Major Geneva Convention signatories have targeted hospitals and humanitarian infrastructure without meaningful punishment


Topics

Human rights | Legal and regulatory


Both recognize the need to engage with non-traditional actors in the digital space, including tech companies and cyber actors, as part of humanitarian protection efforts

Speakers

– Samit D’Chuna
– Audience

Arguments

ICRC engages in confidential bilateral dialogue with both state and non-state actors including cyber groups


Platform companies’ role in conflict narrative shaping requires engagement on algorithmic amplification issues


Topics

Cybersecurity | Legal and regulatory | Human rights


Unexpected consensus

Integration of multiple emblem types beyond Red Cross into digital sphere

Speakers

– Samit D’Chuna
– Audience

Arguments

Digital emblem work extends beyond Red Cross to include other IHL emblems like dangerous forces and cultural property symbols


Integration with existing internet protocols and standards presents both opportunities and challenges


Explanation

There was unexpected consensus that the digital emblem project should encompass all types of protective emblems under international humanitarian law, not just medical emblems, showing broader scope than initially apparent


Topics

Legal and regulatory | Infrastructure | Sociocultural


Acknowledgment of emblem system limitations while maintaining support

Speakers

– Samit D’Chuna
– Tejas Bharadwaj
– Audience

Arguments

Physical emblem works because vast majority of time it is respected, violations make headlines but represent minority of cases


Physical emblem routinely ignored in today’s conflict raises questions about digital emblem’s potential effectiveness


Double standards in international law enforcement undermine trust in emblem system


Explanation

Despite raising serious concerns about violations and enforcement, there was unexpected consensus that the emblem system still has value and should be extended to the digital realm, showing pragmatic acceptance of imperfect but useful tools


Topics

Human rights | Legal and regulatory


Overall assessment

Summary

Strong consensus exists among speakers on technical requirements, multi-stakeholder approach, and need for global accessibility, with shared recognition of both opportunities and challenges


Consensus level

High level of consensus on implementation approach and technical requirements, with constructive skepticism about effectiveness challenges that strengthens rather than undermines the initiative. The agreement spans legal, technical, and practical dimensions, suggesting robust foundation for moving forward despite acknowledged limitations of current international humanitarian law enforcement.


Differences

Different viewpoints

Effectiveness of emblem systems given current violations

Speakers

– Samit D’Chuna
– Tejas Bharadwaj
– Audience

Arguments

Physical emblem works because vast majority of time it is respected, violations make headlines but represent minority of cases


Physical emblem routinely ignored in today’s conflict raises questions about digital emblem’s potential effectiveness


Major Geneva Convention signatories have targeted hospitals and humanitarian infrastructure without meaningful punishment


Summary

Samit argues the physical emblem is largely successful with violations being exceptional cases that receive disproportionate media attention, while Tejas and audience members express skepticism about effectiveness given high-profile violations and lack of enforcement


Topics

Human rights | Legal and regulatory


Primary mechanisms for ensuring compliance with international humanitarian law

Speakers

– Samit D’Chuna
– Audience

Arguments

International humanitarian law compliance relies on training, bilateral dialogue, and moral obligation, not just punishment


Double standards in international law enforcement undermine trust in emblem system


Summary

Samit emphasizes multiple compliance mechanisms beyond punishment including training and dialogue, while audience members focus on the lack of meaningful enforcement and punishment as undermining the entire system


Topics

Legal and regulatory | Human rights


Relationship between digital emblem and cybersecurity measures

Speakers

– Chelsea Smethurst
– Audience

Arguments

Technical solutions must support legal requirements rather than drive them


Digital emblem relationship to broader cybersecurity protection measures needs clarification


Summary

Chelsea argues the digital emblem should be primarily a legal tool with technical solutions supporting legal requirements, while audience members question whether it should be integrated with broader cybersecurity protection measures


Topics

Cybersecurity | Legal and regulatory | Infrastructure


Unexpected differences

Role of platform companies in conflict situations

Speakers

– Samit D’Chuna
– Audience

Arguments

ICRC engages in confidential bilateral dialogue with both state and non-state actors including cyber groups


Platform companies’ role in conflict narrative shaping requires engagement on algorithmic amplification issues


Explanation

While both acknowledge the need to engage with tech platforms, there’s an unexpected gap in how they view the scope of engagement – Samit focuses on traditional IHL compliance dialogue, while audience members raise concerns about algorithmic amplification of conflict narratives, which represents a newer dimension of platform responsibility that wasn’t fully addressed


Topics

Sociocultural | Human rights | Legal and regulatory


Overall assessment

Summary

The discussion revealed moderate disagreements primarily around the effectiveness and enforcement of international humanitarian law, with speakers generally aligned on goals but differing on implementation approaches and risk assessments


Disagreement level

The disagreements are substantive but not fundamental – all parties support the digital emblem concept but have different perspectives on its likely effectiveness, implementation priorities, and relationship to broader cybersecurity measures. The skepticism from moderator and audience members serves as a healthy counterbalance to the more optimistic views of the project leaders, highlighting real challenges that need to be addressed for successful implementation.


Partial agreements



Takeaways

Key takeaways

The Digital Emblem Initiative aims to create a universally recognized symbol for protecting digital infrastructure used by medical and humanitarian services during armed conflicts, mirroring the success of the physical Red Cross emblem


Three technical implementation approaches are being considered: protected entity flags on website addresses, digital certificates as ‘passports for websites’, and metadata labels on digital files


The digital emblem must be decentralized (no central authority controls usage), support covert inspection (can be checked without alerting the protected entity), and be removable based on security analysis


Significant diplomatic progress has been achieved with consensus adoption at the 34th International Conference of the Red Cross and Red Crescent, and support from 160+ tech companies through the Cybersecurity Tech Accords


The physical Red Cross emblem has been successful for 160 years because it works the vast majority of the time – violations make headlines but represent a minority of cases


Success requires building a system of trust similar to the physical emblem, with technical standardization through IETF, legal integration into international humanitarian law, and widespread global adoption


The initiative must be simple and cost-effective enough for developing countries and smaller organizations to implement, avoiding creating barriers to access


Digital emblem work extends beyond Red Cross to include other IHL emblems like dangerous forces, civil defense, and cultural property symbols, each requiring tailored protection approaches


Resolutions and action items

IETF working group will launch in July to develop verifiable, interoperable and secure technical standards for the digital emblem


ICRC will continue annual meetings with states to update on technical development and advance integration into international humanitarian law


Legal integration will be pursued through multiple mechanisms including amending Additional Protocol I technical annex, creating a new fourth protocol, or unilateral state declarations


Australian Red Cross will lead work with national societies worldwide to integrate digital emblem into domestic legal systems


Industry scaling beyond core companies is needed to achieve global norm status through broader tech sector engagement


Continued confidential bilateral dialogue with both state and non-state actors, including cyber groups, to ensure compliance and proper implementation


Unresolved issues

How to effectively address skepticism about digital emblem effectiveness given current violations of physical emblem protections in ongoing conflicts


Balancing security requirements to prevent misuse while maintaining simplicity for resource-constrained organizations in developing countries


Managing the risk of increased exposure to malicious actors when marking protected digital infrastructure


Determining the relationship between digital emblem protection and broader cybersecurity measures – whether they should be integrated or remain separate approaches


Addressing the role of social media platforms and their algorithmic amplification in conflict situations and how this relates to digital emblem protection


Ensuring universal adoption across all 196 Geneva Convention signatory states despite varying technical capabilities and political positions


Resolving double standards in international law enforcement that undermine trust in the emblem system


Clarifying how different types of IHL emblems (Red Cross, dangerous forces, cultural property) will be technically implemented and distinguished in the digital sphere


Suggested compromises

Digital emblem designed as a tool that can be removed if security analysis shows risks outweigh benefits, maintaining flexibility for protected entities


Multiple technical implementation pathways being developed simultaneously to accommodate different organizational needs and capabilities


Phased approach starting with core stakeholders and gradually scaling to achieve global adoption rather than requiring universal implementation from the start


Integration into international law through multiple mechanisms to accommodate states that are not party to all Geneva Convention protocols


Balancing security and accessibility by developing standards that are ‘secure enough to prevent misuse’ while ‘simple enough for humanitarian organizations in developing countries to use’


Thought provoking comments

The Red Cross emblem, one of the most universally recognized symbols of protection, is kind of routinely ignored in today’s conflict. Why should anyone believe that a digital emblem will fare any better? Is it simply just another idealistic gesture in a world where violations, not protections, dominate the headlines?

Speaker

Tejas Bharadwaj


Reason

This comment cuts to the heart of the initiative’s credibility by directly challenging the fundamental premise. It forces the discussion beyond technical implementation to address the elephant in the room – whether symbolic protection has any real-world efficacy when violations make headlines daily.


Impact

This question fundamentally shifted the discussion from ‘how’ to implement the digital emblem to ‘why’ it would work at all. It prompted Samit to provide crucial context about the emblem’s actual success rate and introduced the key insight that violations make news precisely because they’re exceptional, not routine. This reframing became central to understanding the project’s value proposition.


The vast majority of the time the emblem is respected… what we see in the news are violations of international humanitarian law… it’s important to remember that the vast majority of the time the emblem does in fact work

Speaker

Samit D’Chuna


Reason

This insight challenges our perception bias by explaining that we hear about violations precisely because they’re newsworthy exceptions, not the norm. It reframes the entire discussion about the emblem’s effectiveness by highlighting the invisible successes versus visible failures.


Impact

This comment provided the foundational justification for the entire digital emblem project. It shifted the conversation from defensive to confident, establishing that the physical emblem’s success model could be replicated digitally. Chelsea later referenced this insight when discussing the project’s goals, showing how it became a cornerstone argument.


How can we even have trust in the emblem in the end, when over the last three years there have been large signatories to the Geneva Conventions, basically completely disregarding its functions? … And there hasn’t really been much actual international law punishment towards it

Speaker

Jure Bokovoy (Audience)


Reason

This comment represents the skeptical voice of many observers who see high-profile violations and question the entire system’s credibility. It forces a deeper examination of how international humanitarian law actually works beyond punishment mechanisms.


Impact

This challenge prompted Samit to provide one of the most profound explanations of how international law actually functions – not primarily through punishment but through training, dialogue, and the fundamental decency of most actors. It led to the powerful personal anecdote about the ‘Roots of Restraint’ study, showing how those actually affected by conflict view IHL’s effectiveness differently than distant observers.


The reason the color red was chosen was actually because if you’re a wounded soldier, if you’re a war medic, then you actually always have the access to the color red and you always have access to white because soldiers usually carry the flag of surrender… everyone should be able to use the emblem and there shouldn’t be any sort of barriers to the creation of the emblem

Speaker

Samit D’Chuna


Reason

This historical insight reveals the profound practical wisdom embedded in the original emblem design – accessibility and universality were built into the system from the beginning. It connects 19th-century thinking to modern digital challenges.


Impact

This comment directly influenced the technical discussion by reinforcing Chelsea’s point about the ‘lowest common denominator’ requirement. It provided historical validation for making the digital emblem accessible to humanitarian organizations in developing countries, showing how practical accessibility has always been central to the emblem’s success.


If you are marking medical and humanitarian digital infrastructure, could you inadvertently make them more exposed to malicious actors?… what you hear in the news is actually what you don’t hear in the news. That is a massive accomplishment

Speaker

Chelsea Smethurst


Reason

This comment acknowledges a genuine technical and strategic dilemma while reframing success metrics. It shows sophisticated thinking about unintended consequences while embracing the counterintuitive idea that ‘not making news’ is the goal.


Impact

This comment bridged the technical and humanitarian perspectives, showing how the technology sector is grappling with the same fundamental questions about visibility and protection. It reinforced Samit’s earlier point about invisible successes and helped establish shared understanding between the legal and technical approaches.


We’re not driving this as a cyber security initiative, rather it is how do we develop security controls to support the legal requirements that we really need to meet here

Speaker

Chelsea Smethurst


Reason

This comment reveals a crucial philosophical approach – subordinating technical capabilities to humanitarian law requirements rather than the reverse. It shows how the project maintains its humanitarian focus despite technical complexity.


Impact

This clarification helped distinguish the digital emblem project from typical cybersecurity initiatives, maintaining focus on humanitarian protection rather than technical security. It reinforced that this is fundamentally a humanitarian law project that happens to use technology, not a tech project with humanitarian applications.


Overall assessment

These key comments shaped the discussion by systematically addressing the fundamental challenges to the digital emblem’s credibility and feasibility. The conversation evolved from initial skepticism about symbolic protection in a world of visible violations, through historical and empirical evidence of the physical emblem’s success, to sophisticated technical and legal considerations for digital implementation. The most impactful moments came when speakers reframed common assumptions – that violations make news precisely because they’re exceptional, that international law works primarily through training and dialogue rather than punishment, and that technical solutions must serve humanitarian law rather than drive it. This progression created a compelling narrative arc from doubt to understanding, establishing both the necessity and feasibility of the digital emblem initiative.


Follow-up questions

How can trust in the digital emblem be maintained when major signatories to the Geneva Conventions are disregarding physical emblems and targeting hospitals and humanitarian infrastructure without meaningful international punishment?

Speaker

Jure Bokovoy (Finnish Green Party audience member)


Explanation

This addresses a fundamental challenge to the entire premise of the digital emblem project – if physical emblems are being ignored by major powers, why would digital emblems be respected?


Should cybersecurity protection measures be developed alongside the digital emblem, or are these separate initiatives that need to be worked on independently?

Speaker

Mia Kuhlewin (IETF transport protocols worker)


Explanation

This explores whether the digital emblem should be part of a broader cybersecurity framework or remain focused solely on identification and legal protection


How should different types of protective emblems (three orange dots, white flag, cultural property emblems) be handled in the digital sphere – can they use the same technical approach or do they need different measures?

Speaker

Unnamed audience member


Explanation

This addresses the scalability and technical requirements for implementing multiple types of protective emblems digitally, each with different legal protections


Does the ICRC engage in confidential dialogue with social media platforms during conflicts, and how do they ensure algorithmic amplification doesn’t exacerbate humanitarian catastrophes?

Speaker

Unnamed audience member


Explanation

This explores the ICRC’s engagement with non-state tech actors and their role in information warfare during conflicts


What are the specific technical implementation details for embedding digital emblems into different types of digital infrastructure across various countries and organizations?

Speaker

Tejas Bharadwaj (moderator)


Explanation

This addresses the practical challenges of global deployment and standardization across diverse technical environments


How will the costs and technical barriers be minimized for developing countries and smaller organizations to implement the digital emblem?

Speaker

Tejas Bharadwaj (moderator)


Explanation

This addresses equity and accessibility concerns to ensure the digital emblem doesn’t create a two-tiered system of protection


What specific legal mechanisms will be used to integrate the digital emblem into international humanitarian law – protocol amendments, new protocols, or other approaches?

Speaker

Tejas Bharadwaj (moderator)


Explanation

This addresses the concrete legal pathways needed to make the digital emblem legally binding and enforceable


How will the digital emblem project scale beyond the current core group of companies and organizations to achieve global adoption?

Speaker

Chelsea Smethurst (Microsoft)


Explanation

This addresses the challenge of moving from a pilot project with a few major tech companies to widespread global implementation


Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

Open Forum #18 Digital Cooperation for Development Ungis in Action


Session at a glance

Summary

This discussion focused on the United Nations agencies’ collaborative efforts in implementing the World Summit on the Information Society (WSIS) process as it approaches its 20-year review, particularly in relation to the Global Digital Compact (GDC). The session was moderated by representatives from the International Telecommunication Union (ITU) and featured participation from multiple UN agencies working together through the UN Group on the Information Society (UNGIS).


UNESCO, serving as the current chair of UNGIS, emphasized the enduring relevance of WSIS’s technology-neutral framework, which has successfully adapted across successive waves of digital innovation from early internet adoption to today’s artificial intelligence era. The organization highlighted its role as lead facilitator for six WSIS action lines and its commitment to ensuring digital transformation serves humanity rather than the reverse. UNDP, as vice-chair of UNGIS, outlined its extensive on-ground presence in over 170 countries, supporting governments with digital transformation initiatives, AI assessments, and digital public infrastructure development.


Regional UN commissions played a significant role in the discussion, with representatives from Africa (UNECA), Latin America (ECLAC), and Asia-Pacific (UNESCAP) describing their regional implementation efforts. The African perspective particularly emphasized the need for continued WSIS expansion and addressing connectivity challenges, while ECLAC presented innovative tools like digital complexity simulators to support productive digital transformation.


Specialized agencies including WIPO, UNIDO, FAO, and UNU described their sector-specific contributions to digital development, from intellectual property databases and AI-powered manufacturing solutions to digital agriculture initiatives and academic research networks. A key theme throughout the discussion was the importance of avoiding duplication between WSIS and the GDC while ensuring coherent integration of both frameworks. The session concluded with stakeholder questions about digital health access in marginalized communities, demonstrating the practical implementation challenges these agencies collectively address in their coordinated approach to global digital cooperation.


Keypoints

## Major Discussion Points:


– **UN Agency Coordination and Collaboration**: The discussion emphasized how various UN agencies (UNESCO, UNDP, UNCTAD, UNECA, ECLAC, UNU, WIPO, UNIDO, FAO, UNESCAP) are working together through UNGIS (United Nations Group on the Information Society) to avoid duplication and ensure coordinated implementation of digital development initiatives across different regions and sectors.


– **WSIS Plus 20 Review Process**: A central focus was the 20-year review of the World Summit on the Information Society (WSIS), highlighting how the framework has remained relevant and evolved with technological changes from early internet adoption through AI, and the need to strengthen it as a multi-stakeholder platform for digital cooperation.


– **Integration of Global Digital Compact (GDC) with WSIS Architecture**: Multiple speakers stressed the importance of integrating GDC commitments into the existing WSIS framework to avoid duplication and ensure a cohesive approach to digital cooperation, as recognized in recent ECOSOC resolutions.


– **Regional Implementation and Capacity Building**: Significant attention was given to how regional UN commissions (UNECA for Africa, ECLAC for Latin America, UNESCAP for Asia-Pacific) are implementing WSIS action lines at regional levels, with emphasis on addressing connectivity challenges, digital skills development, and supporting developing countries in digital transformation.


– **Practical Applications and Innovation**: Discussion of concrete initiatives including AI for development, digital public infrastructure, technology transfer, startup incubation, digital health solutions, and evidence-based policy tools that demonstrate how UN agencies are translating global frameworks into on-ground impact.


## Overall Purpose:


The discussion aimed to showcase how UN agencies are collaboratively implementing the WSIS process and preparing for its 20-year review, while demonstrating coordination mechanisms to avoid duplication and ensure effective integration with newer frameworks like the Global Digital Compact.


## Overall Tone:


The tone was consistently professional, collaborative, and forward-looking throughout the session. Speakers demonstrated mutual respect and emphasized partnership approaches. The moderator maintained an encouraging and inclusive atmosphere, and there was a strong sense of shared purpose among participants. The tone remained constructive even when technical difficulties occurred with remote participants, and concluded on a positive note with appreciation for collaboration and an invitation for group photography.


Speakers

**Speakers from the provided list:**


– **Moderator** – Gitanjali (ITU representative), coordinates UN agency collaboration and WSIS process implementation


– **Participant 1** – UNESCO representative, Chair of UNGIS (United Nations Group on Information Society), lead facilitator for six WSIS action lines


– **Yu-Ping Lien** – UNDP (United Nations Development Program) representative, Vice-Chair of UNGIS, focuses on digital cooperation and AI for sustainable development


– **Liping Zhang** – UNCTAD CSTD representative, involved in WSIS plus 20 review process and ECOSOC resolutions


– **Participant 3** – Maghtar, UN regional commission representative (likely UNECA), works on African digital development and Internet Governance


– **Sebastian Rovira** – UN ECLAC (Economic Commission for Latin America and the Caribbean) representative, Vice-Chair of UNGIS, focuses on digital transformation in Latin America


– **Morten Langfeldt Dahlback Rapler** – United Nations University (UNU) representative, works on technology research, capacity building, and policy advice


– **Richard Gooch** – WIPO (World Intellectual Property Organization) representative, focuses on intellectual property and innovation databases


– **Jason Slater** – UNIDO (United Nations Industrial Development Organization) representative, Chief AI Innovation Digital Officer, co-chair of Global Digital Compact implementation


– **Dejan Jakovljevic** – FAO (Food and Agriculture Organization) representative, WSIS Champion, focuses on digital transformation in agri-food systems


– **Participant 4** – Siope, UNESCAP representative from Thailand, works on Asia-Pacific regional digital cooperation


– **Audience** – Multiple audience members including:


– Tsolofelo Mugoni – Internet Governance Coordinator from South Africa


– Ashling Lynch-Kelly – Foundation The London Story representative, Indian diaspora-led human and digital rights organization


**Additional speakers:**


None identified beyond those in the provided speakers names list.


Full session report

# UN Agency Collaboration in WSIS Implementation and Global Digital Compact Integration


## Executive Summary


This discussion brought together representatives from multiple United Nations agencies to examine their collaborative efforts in implementing the World Summit on the Information Society (WSIS) process as it approaches its 20-year review milestone. Moderated by Gitanjali from ITU and featuring participation from UNESCO, UNDP, UNCTAD, regional UN commissions, and specialized agencies, the session focused on coordinated approaches through the UN Group on the Information Society (UNGIS) to integrate the recently adopted Global Digital Compact (GDC) while avoiding duplication of efforts.


*Note: This summary is based on a transcript with significant technical audio issues that affected the clarity of several speakers’ contributions, particularly from UNESCO. Some speaker identifications and detailed content may be incomplete due to these recording limitations.*


## Opening Framework


Gitanjali from ITU established the context of WSIS as a unique multi-stakeholder platform that has evolved over two decades. She emphasized that UNGIS serves as a coordination mechanism bringing UN agencies together to drive digital transformation while avoiding duplication. The timing was significant as member states prepare for the WSIS Plus 20 review process and work to integrate Global Digital Compact commitments into existing frameworks.


The moderator highlighted the challenge of ensuring that diplomats and stakeholder communities understand how WSIS has evolved with technology over the past 20 years, moving beyond basic connectivity to encompass artificial intelligence, digital rights, and comprehensive digital transformation.


## Agency Perspectives


### UNESCO’s Coordinating Role


As current chair of UNGIS and lead facilitator for six WSIS action lines, UNESCO’s representative (Participant 1) emphasized that digital transformation must serve humanity rather than the reverse. They highlighted WSIS’s technology-neutral framework, which has successfully adapted across successive waves of digital innovation. UNESCO mentioned producing evidence-based policy tools, including Internet Universality indicators and AI governance assessments to support member states.


*Note: Large portions of UNESCO’s contribution were affected by audio quality issues in the original recording.*


### UNDP’s Development Approach


Yu-Ping Lien from UNDP, serving as vice-chair of UNGIS, outlined the organization’s presence in over 170 countries and support for over 130 countries with digital and AI programs designed to advance sustainable development goals. UNDP has developed a digital public infrastructure approach emphasizing interoperable, inclusive, and rights-based digital transformation through AI readiness assessments and capacity-building initiatives.


Yu-Ping acknowledged that “it is a difficult time for the multilateral system and the international collaborative spirit,” emphasizing the need to leverage existing institutions and collaborative partnerships that deliver measurable impact.


## Regional Implementation


### African Priorities


The UNECA representative presented African perspectives, requesting expansion of WSIS for the next 10 years in Africa alongside continuation of the Internet Governance Forum. They emphasized Africa’s ongoing connectivity and infrastructure challenges, including electricity access issues that impact digital development. The representative called for mechanisms to integrate WSIS with the Global Digital Compact while establishing evaluation and monitoring systems, and requested institutionalization of the Internet Governance Forum.


### Latin American Innovation


Sebastian Rovira from UN ECLAC, joining virtually, presented approaches for the Latin American region, including digital complexity simulators to support productive digital transformation. ECLAC’s digital agenda aligns with the WSIS process while focusing on productive development and digital transformation challenges unique to the region.


### Asia-Pacific Cooperation


Siope from UNESCAP, joining from Thailand, outlined the Asia-Pacific region’s approach to promoting regional cooperation through steering committee meetings and best practice sharing, emphasizing collaborative learning and knowledge exchange.


## Specialized Agency Contributions


### WIPO – Intellectual Property and Innovation


Richard Gooch from WIPO provided data on AI-related patent applications, which increased by 3,000% between 2010 and 2024, with a 60% increase between 2021 and 2022. WIPO’s databases serve as resources for innovation and technology transfer, particularly through Technology and Innovation Support Centers helping innovators in developing countries access patent information.


### UNIDO – Industrial Applications


Jason Slater from UNIDO, serving as Chief AI Innovation Digital Officer and co-chair of Global Digital Compact implementation, presented applications of digital transformation in manufacturing. UNIDO helps small and medium enterprises integrate AI-powered solutions into production lines to improve energy efficiency and productivity. The ScaleX program supports startups and scales solutions for member states through accelerator programs and innovation challenges.


### FAO – Agricultural Transformation


Dejan Jakovljevic from FAO, recognized as a WSIS Champion, emphasized digital transformation of agri-food systems and production of digital public goods. FAO advocates for fundamental transformation of agricultural systems rather than simple efficiency improvements, addressing challenges like food security through integrated digital solutions.


### UNU – Research and Capacity Building


Morten Langfeldt Dahlback Rapler from United Nations University outlined UNU’s contributions through independent research, capacity building, and policy advice. UNU provides intellectual foundation for evidence-based digital governance, with approximately 1,000 experts across multiple global locations.


## Integration and Coordination Challenges


### Global Digital Compact Integration


Liping Zhang from UNCTAD CSTD emphasized that recent ECOSOC resolutions expect UNGIS to play a bigger role in the WSIS Plus 20 review process and develop implementation mapping for GDC integration. The consensus was that UNGIS provides the appropriate coordination mechanism for this integration, leveraging existing inter-agency relationships rather than creating new structures.


### Avoiding Duplication


Multiple speakers emphasized avoiding duplication between WSIS and GDC implementation, reflecting broader multilateral system challenges where overlapping mandates can reduce effectiveness. The solution involves leveraging existing successful mechanisms like UNGIS while adapting to new requirements.


## Audience Engagement


The discussion included audience questions, notably from Tsolofelo Mugoni from South Africa and Ashling Lynch-Kelly representing Foundation The London Story, who highlighted challenges in digital health implementation in India. Lynch-Kelly noted that while digital health democratizes healthcare access, marginalized communities often remain excluded, emphasizing the gap between policy aspirations and ground reality.


## Next Steps and Future Directions


The discussion identified several priorities:


– Enhanced UNGIS role in the WSIS Plus 20 review process


– Development of comprehensive implementation mapping for GDC integration


– Continued regional engagement through ministerial conferences


– Expansion of joint capacity building programs


– Youth engagement through networking events, including a youth party mentioned at the ITU Montbrillant building


## Conclusion


This discussion demonstrated ongoing UN inter-agency collaboration in digital development while highlighting challenges in ensuring inclusive digital transformation. The emphasis on leveraging existing mechanisms like UNGIS rather than creating new structures reflects both resource management considerations and recognition of proven partnerships. The path forward requires continued attention to inclusive implementation and development of concrete mechanisms for measuring progress, building on the foundation of 20 years of WSIS implementation.


Session transcript

Moderator: Good afternoon, ladies and gentlemen. This is right after lunch, so we will try to keep it as interesting as possible. We have our UN agency colleagues here with us today to talk a bit more about what the UN is doing with reference to the WSIS process. As you all know, WSIS is a UN process, because we have the UNGA resolutions and ECOSOC resolutions, and within our own UN agencies we have several resolutions that our membership have approved. So we are here to tell you more about what we are doing and what we think of the future of the WSIS process. The United Nations Group on the Information Society plays a pivotal role in advancing the mandates of WSIS on digital for development. It was created by the Chief Executives Board to ensure that the UN system works together to drive digital transformation and sustainable development. We are a very effective, well-coordinated and outcome-oriented group. We have chairs, vice chairs, and members of the CEB who are members of UNGIS, and we have extended it to observer members as well, which include new UN entities like ODET. The key mandates of UNGIS include policy coordination, multi-stakeholder engagement like we are doing today, supporting internationally agreed global development goals, and monitoring and reporting, and of course UNGIS has been instrumental in supporting the Global Digital Compact and making sure that we delivered UNGIS inputs into the Global Digital Compact. So without any further delay, I would like to invite our Chair, UNESCO, and our Vice-Chairs, UNCTAD and UNDP (I represent ITU), to let us know what they are doing and what our vision is, aligning it with the GDC without duplications. UNESCO, I’ll pass the floor to you first. Over to you, please.


Participant 1: Thank you very much Gitanjali. So just to highlight that indeed the 20-year review of WSIS is a pivotal moment to assess the progress and explore future directions, and for us one of the enduring strengths of the WSIS framework lies in its technology-neutral and principle-based design. The action lines of WSIS and the outcome documents were deliberately crafted to transcend specific technologies, and this design philosophy has allowed the WSIS framework to remain relevant across successive waves of digital innovation, from the early days of internet adoption through the rise of mobile phones and social media into today’s transformative era of AI. At UNESCO, as the lead facilitator for six of the action lines, we look at ways that up-to-date and unique technologies can transform humanity into its best version. UNESCO’s report for the 20 years of WSIS, contributing to the 28th session of the CSTD, called for reinforcement of WSIS as a central multi-stakeholder platform that facilitates international cooperation in digital policy. And for us, WSIS can strengthen its position as a hub for dialogue on emerging technologies, on issues such as misinformation, gender equality, and digital rights. To this end, it’s essential to further expand WSIS’s unique multi-stakeholder engagement with grassroots organizations, with youth, with marginalized communities. Our work is indeed guided by a singular vision: that digital transformation must serve humanity, not the other way around, as has been highlighted and stated in UNESCO’s report.
We’re also committed to building capacity, training, and education for WSIS, as has been highlighted by various UN agencies, and in this vein UNESCO is supporting Member States with evidence-based policy tools, such as the Internet Universality ROAM-X indicators and the Readiness Assessment Methodology for AI governance, just to give a few concrete examples. We’re also committed to working with educators, civil servants, and judicial actors to navigate the complexities of the digital era. We also served in the first consultation of the review process, and we are thankful to the co-facilitators, the representatives of Albania and Kenya to the UN, for their engagement and cooperation, and of course to our key partners ITU and UNDP. Our engagement within UNGIS, the UN Group on the Information Society, is central to this effort, and as the current chair of the group, alongside vice-chairs ITU, UNDP and UNCTAD, UNESCO helps ensure coherence and coordination across the UN system as new challenges and opportunities open up.


Moderator: Thank you very much. Vice-chairs, please join us on the podium as well. Thank you so much. This will all feed into the WSIS Plus 20 overall review, highlighting the importance of the work that UNESCO is doing to implement the WSIS process and the vision of WSIS. So really, as you mentioned, the United Nations Group on the Information Society, UNGIS, is digital cooperation in action. We have been working together, not only to provide inputs to other UN processes and events happening, but also to show how the UN can work together and avoid duplication. Thank you so much. I’d like to pass on to Yu-Ping from UNDP, who is also vice-chair of UNGIS. Yu-Ping, over to you.


Yu-Ping Lien: Thank you so much, Gitanjali, and really thank you to colleagues for all being here today and to the stakeholders for spending so much of your time with us. As the United Nations Development Program, we’re very proud to be a co-convener of the WSIS Forum and also to work in partnership with all our sister and brother UN agencies, really thinking about how we deliver directly to communities and countries in the area of digital cooperation. The United Nations Development Program is the UN’s development wing. We’re present in over 170 countries and territories around the world, and in many countries we have been the face of the UN in accompanying a national government through all phases of its development, particularly in countries where it is sometimes a more challenging environment. UNDP has functioned as the right hand of government in implementing public services, supporting institutional operations, and really helping to deliver public services to the citizens and people of that country. Right now, for instance, we are in over 130 countries supporting or implementing programs on leveraging digital and AI to achieve the sustainable development goals. We have supported over 60 countries, I think, at last count, in more specific areas such as AI and digital assessments, digital capacity building, supporting the rollout of digital public infrastructure, and technical advisory support to governments, really thinking about how digital can be leveraged for the transformation of their countries in support of development itself. And then we work very closely with our UN agency partners in many specific areas: with ITU on global AI skills and capacity building, with UNESCO colleagues on AI landscape and readiness assessments, really looking at specific areas in which we can turn global discussions into concrete areas of work.
I’ve touched on the various areas. We work at all levels: national, in-country; regional, through our regional bureaus; and global, through convening as well as thought leadership around digital public infrastructure, AI for sustainable development, and how AI and digital can be leveraged to achieve development with very practical delivery aspects. I want to really emphasize that, because UNDP has such a broad developmental mandate, we can work across all these sectors to bring together digital transformation and digital cooperation in a holistic and comprehensive way. We work, for instance, with the technical expertise of our different colleagues that focus on particular sectors, bringing it into an overall, whole-of-government developmental approach, whereby we are not just looking at one piece in isolation but trying to bring it all together in a development perspective. In that way, we serve as an integrator in-country, partnering with many of our colleagues who bring that level of technical expertise to the support of member states directly in-country. I just want to also support the point made by my other colleagues that, in the implementation of the WSIS action lines and the WSIS framework, this coming together of the UN agencies through interagency collaboration has been very powerful. The role of the United Nations Group on the Information Society, which UNESCO highlighted and currently chairs, has been critical in bringing about this kind of policy coherence, alignment, information sharing, and the collaboration that can bring to bear the cooperative strengths, comparative advantages and expertise of the various UN systems. And on a final point, I do also note that it is a difficult time for the multilateral system
and the international collaborative spirit that has brought us all together. The fact that WSIS is 20 years old, and that we are still coming together to discuss such aspects as capacity development and the need to make sure that everyone, everywhere, including developing countries and the global South, is part of this global conversation, is really important. That is why, in the WSIS Plus 20 review process, we need to double down on delivery of impact, on leveraging existing institutions, interagency mechanisms, and collaborative efforts and partnerships that have worked and delivered, and really continue to see how we can further support them at this critical moment. With that, I really look forward to hearing from other colleagues and stakeholders.


Moderator: Thank you, Yu-Ping. UNDP is not only a close partner of the WSIS process, but also one of our main voices in New York, along with the New York offices that each one of us have. Thanks a lot for keeping us all updated about what’s going on in New York. We have made several efforts to ensure that the diplomats and the wider stakeholder community in New York are also abreast of what’s happening within the WSIS process, because finally the negotiations are going to take place in New York in December, and we would need each one of you to be advocates in New York for us, to be able to explain the importance of the WSIS process, the multi-stakeholder elements that it has, and also that it has evolved with the evolution of technology. So the framework of WSIS Action Lines, the UN framework, and so on and so forth, has all evolved with the evolution of technology in these 20 years; it’s not that we were set 20 years back and are old and have not evolved. As you can see, we are all standing here in front of you, ensuring that we are agile in digital cooperation and that we are delivering. So, thank you very much. I have Ms. Liping Zhang from UNCTAD CSTD with us online. Liping, can you hear us? The floor is yours, Liping.


Liping Zhang: Can you hear me? Yes, Liping, please go ahead. Thank you, Gitanjali. Well, it’s a great pleasure to participate in this event at IGF again because we launched the CSTD consultation on WSIS plus 20 at IGF in Kyoto in October 2023. So, we are very happy to participate in this event organized by ITU at the IGF.


Moderator: Liping, we lost you. We don’t hear you anymore. Okay, we do hope we’ll get Liping back. As you all know, member states negotiated the ECOSOC resolution at the annual CSTD, and one of the main paragraphs approved recognizes the importance of integrating the implementation of GDC commitments into the WSIS architecture in order to avoid duplication and ensure a cohesive and consistent approach to digital cooperation. This really shows how member states are currently thinking of including the GDC objectives in the WSIS architecture. Liping, are you back? I just referred to the resolution that was adopted. Okay, while we get Liping back, maybe we could move to Maghtar, because we are also working with the UN regional commissions, which have a mandate to implement WSIS.


Participant 3: We held two events at the regional level: one on the governance of cybersecurity with human rights and security, and a regional preparatory meeting. We are making very good progress in several areas despite challenges. A key outcome of the meeting was the adoption of a declaration, and what is important is that we requested the alignment and integration of this framework into WSIS for the next phase. We also adopted a declaration at the African Internet Governance Forum, held in Tanzania at the end of May. This declaration calls upon the continuation of the IGF for another ten years, to align with the request we made in Cotonou, and to align with the implementation by the relevant authorities. The organizations need to come together and work on ways to avoid any duplication with this framework, and to address the challenges faced by African countries, such as the issue of connectivity. The issue of electricity should also be included in the next phase of WSIS, because it is something very important. The issue of data governance, of course, is covered in the Global Digital Compact, and AI is also a key issue, as is the inclusion of people with disabilities in the digital economy worldwide. As part of UNGIS, we also have several activities with our sister agencies; I can highlight some. Two weeks ago we launched the report on technology and innovation, and it went very well. We are also working closely with ITU to develop the Africa Digital Gap report, to be ready by September; we were requested by the Secretary-General to work closely with ITU on this. ECA has also developed a taxation model for the ICT sector. This model shows how, when we optimize taxation of the ICT sector, we can increase GDP, job creation, as well as connectivity.
And we have agreed with ITU to work together to expand this taxation report and taxation calculator to other regions across the world. We also work closely with ITU on digital public infrastructure for Africa. Together with UNESCO and ITU, we also organized the Technology and Innovation Forum for Africa, held in Uganda in April. It went very well, and a lot of the discussion was around how we can promote innovation using AI across the continent, in line with the implementation of the five objectives of the GDC. On digital ID, we also work together; we have a good example with UNDP and UNICEF in Malawi, supporting their strategy and their project on digital ID. On data governance, we are working closely with some UN agencies in four countries, namely DRC, Mozambique, Tanzania and Burundi, to support them in developing their national data governance strategies as well as in building their capacity on data governance. In conclusion, I think we need to work all together and more efficiently given the budget situation: UNICEF and ITU work closely, as do UNCTAD and UNDP, and I think we can replicate this in several countries with other organizations. For Africa, the message is very clear. We request the expansion of WSIS for the next 10 years in Africa, and of the IGF also for the next 10 years, and of course, to avoid any duplication with the Global Digital Compact, we should put in place a mechanism for the integration of these frameworks, as well as a mechanism for evaluation and monitoring for WSIS and the IGF, to measure progress and correct course where needed, because we have targets in WSIS but we do not have them in the Internet Governance Forum. We also need to institutionalize the Internet Governance Forum. That is a summary of the key activities undertaken in Africa under the commission. Thank you.


Moderator: Thank you, Maghtar, and thank you for emphasizing the important role that regional commissions are playing at the regional level to implement WSIS. I do have Sebastian Rovira on our list from UN ECLAC, the Latin American Commission. Colleagues from the production company, could we know if Sebastian is online? Yes, I can see him on the screen. Sebastian, it’s really late for you. Over to you.


Sebastian Rovira: Thank you very much, Ms. Gitanjali. Nice to be here; it's a pleasure to share, even if only online. Just to bring up some issues related to what we have been doing at ECLAC on digital transformation over the last year. As you know, ECLAC has been very much at the forefront of design and analytical work to support governments in navigating the digital transformation, and we have put a strong emphasis on evidence-based policy. One of our flagship initiatives here is what we call the simulator of productive digital transformation, which supports the design and implementation of new tools for digital transformation in the region. This tool is grounded in the concept of digital complexity, something new for the region, and we are seeking to understand the capacity of a country, or a sector, to integrate and absorb advanced digital technologies based on its productive and technological capacity. This is also important for implementing the eLAC digital agenda, a regional project we have been working on for the last 20 years, very much in line with the WSIS process. These tools try to support the identification of digital pathways for different sectors of the economy, to quantify the distance to greater digital complexity, and to design targeted interventions by matching technology demand with the institutional and productive readiness of each territory. In the context of the WSIS plus 20 review process, this approach tries to contribute concretely to several dimensions of the digital development agenda. Scaling it up will require not just technical collaboration but also political alignment and resource mobilization.
You know, WSIS plus 20 provides a unique opportunity to formalize these kinds of synergies and embed the digital complexity approach into the global digital cooperation architecture, particularly in the context of the GDC and SDG implementation. So I think it is very important to identify new tools and new ways to collaborate in this process, since digital transformation is accelerating and we need to find how best to support it. Back to you, Gitanjali.


Moderator: Thank you, Sebastian, and thank you for being the vice chair of UNGIS this year and bringing the regional perspectives. ECLAC is one of the regional commissions that organizes the ministerial conference on the information society in the Latin American and Caribbean region, and it covers WSIS through that ministerial session as well. Last year it was held in Chile. So thank you so much.


Sebastian Rovira: Absolutely, absolutely. We now have a new agenda for these two years, eLAC 2026, approved in 2024 with some key elements. The agenda is organized around three main pillars: one related to productive development, another to well-being, and another to the transformation of the state. There are also axes that cut across the agenda. One relates to meaningful connectivity and digital infrastructure. This is something really important that we have identified: it is obviously not enough to have the infrastructure and be connected; other elements matter to ensure that people use connectivity in a proper way and can appropriate the value that these technologies can bring. Another relates to the governance of data and digital security, which is becoming more important every day, particularly with the advance of artificial intelligence: how you govern the data generated in this space, where the IGF is also starting to matter much more. The last one relates to innovation and emerging technologies, artificial intelligence among them, and how you use these to support sustainable development. The agenda, as Gitanjali says, is organized around these three main pillars and three main axes, and the idea is to use this to accelerate digital transformation, but this acceleration must be inclusive and, at the same time, allow countries in the region to transform their productive processes and their inclusion processes. We also, for sure, have new instruments to implement this agenda. One, as I was saying, is the simulator, which is part of the digital transformation lab.
Another one, and I think this is very important, is an observatory on digital development, because, and this is obvious for developing regions, you need data; you must do evidence-based policy, and that is becoming more important every day. Another relates to the need to support capacity building, and we have digital training schools for Latin America and for the Caribbean. The last one concerns the working groups we have in the framework of the eLAC process: one on the digital economy, another on artificial intelligence, another on meaningful connectivity, another on data governance, and one for the Caribbean. These are some of the instruments we are working with in the eLAC process, because it is not just a political agenda, it is also an implementation agenda that tries to support the countries in this digital transformation process.


Moderator: Thank you so much, Sebastian, for joining us virtually, even though you couldn't be here with us physically this time. We really appreciate the implementation of the WSIS process at the regional level, and you also won a WSIS prize one year, I recall. The Latin American region is much more engaged in the WSIS process thanks to the advocacy that ECLAC has been doing. Thank you so much. The light is low, but I do see Mr. Slater from UNIDO there. Slater, if you'd like to join us here, that would be great. I would now like to pass the floor to Martin from United Nations University. Martin, tell us what's happening in that world.


Morten Langfeldt Dahlback Rapler: Thank you. It's not a short answer, but I'll try and highlight a few things. First of all, UNU has long recognized the potential of technology, and it has been a core focus across all our work since the early 1990s, in anything from water management and natural resource management to peacekeeping and the transformation of society or the public sector at large. Naturally, we've been supporting the WSIS process from the onset, and we generally link this to the SDGs and the GDC. For instance, we contribute to the WSIS process, we participate in the WSIS Forum every year and bring our partners there, but we also contributed to the formulation of the Pact for the Future, and we facilitated the data deep dive consultation for the GDC to ensure alignment; in fact, we worked with Magtar and his team on that, on AI and data governance in the African context. Today, we actively collaborate with essentially all UN agencies, and we also work with regional and national partners. Uniquely, UNU is not funded by the UN general budget; we are instead funded directly by member states and our national and regional partners, who entrust us with independent research, capacity building, policy advice and assessment, and of course we bring the WSIS objectives and other objectives from the UN system into that work. Today we are about a thousand experts, not officials but experts, working in 19 different locations across 14 countries. We usually work in local collaboration with research entities, so we are typically co-hosted with a university or research organization and enjoy the support of national, local or regional government in terms of our financing. We are also deeply embedded in the UN digital cooperation architecture and contribute to a number of forums. Our rector is a member of the UN Secretary-General's Scientific Advisory Board.
Our UN office is working with the Office for ICT on this year's Digital Technologies Report. We are also leading research support for the Advisory Board for Artificial Intelligence. Together with ITU, we are developing the AI for Good flagship project. We are also working on AI in cities with U4SSC, another ITU initiative targeting local authorities and AI in the city context. We ran a series of webinars with the International Social Security Association on all things AI in social security protection and universal coverage, which culminated in a report with policy recommendations for social security agencies last year. We are also part of the Working Group on Data Governance from the SCDD, and of the Working Group on Digital Inclusion of the Islamic Development Bank; on Sunday, we are launching a five-day virtual training on that with their member states and partners. We work closely with the UNDP DPI Safeguards Group, and we are part of the Open Source Convex project launched quite recently by the Office for ICT together with RISE, the Swedish research institute. In the regional context, we have set up a number of member state networks on digital transformation. We launched one a few years ago for West Africa together with UNECA, the African Union and UNDESA, which led to the launch of a South and East African governance network last year in South Africa. And on Tuesday, we launched a Central Asian governance network sponsored by the government of Turkmenistan.
Lastly, we also run a number of online networks, again for local government; Service Online is a network that was also presented earlier today by the Tunisians together with UNDESA, and we do the same with innovation in health, led by technology in the hospice. We also run a number of conference series bringing practitioners, civil society, decision makers and researchers together; the next ones are the AI conference series in Macau in October and our ISCOF conference hosted by the government of Nigeria in Abuja in November. So essentially we support WSIS, its process and its objectives, throughout our work, focusing on independent, evidence-based insights, tailoring the implementation of the objectives in the member states, and always trying to link governance to practical implementation.


Moderator: Thank you, Martin, for that really comprehensive response. In fact, I recall we've been working with UNU, not only your eGov centre but also the centre in Macau, which has been working with us since really the inception of WSIS, and we've always been exploring the academic and training angles. Not only that, I think we've been in joint sessions with countries to explore the implementation of WSIS action lines on the ground. So thank you so much for this excellent collaboration that we have for WSIS. I'll move on to Richard from WIPO, who is not only our good neighbour in Geneva but has been working very closely with us on various issues. Richard, over to you.


Richard Gooch: Absolutely, thank you very much, Gitanjali, for inviting me, and of course confirming that, as a good neighbour from across the street, we always work very closely with our good friends from ITU. As the UN agency for intellectual property, WIPO serves the world's innovators and creators. We do this through our international IP registries, which help them take their ideas across borders, and by setting international IP standards and norms. This includes the two new international multilateral treaties that were concluded last year. One of the more exciting parts of our organization's work is our ability to track intellectual property activity across the world. Our databases include, among others, 120 million patent documents, 17 million designs and 68 million trademarks. All of this data is available for anyone to use, and it is powered by different AI tools that help search it and translate it into different languages. As examples of what you can do with this IP information: you can find, among other things, that over the past 20 years patent applications have grown fastest in computer technologies and digital communication, with average annual growth of around 8%. Between 2010 and 2024, the number of patent grants for AI technology was up over 3,000%, and between 2021 and 2022 alone it rose 60%. And this is just a snippet of the insights you can get from this IP information. Through our patent information insight reports, we leverage all this patent information to explore a number of technologies, such as AI of course, but also assistive technologies, transportation, and health and safety tech. On many of these, we are working very closely with many of our UN family partners, as well as beyond.
Perhaps also to say that, equally important, all this information is used by innovators and creators themselves, because by accessing this data they can avoid duplication of R&D, build on existing knowledge to improve inventions, assess the patentability of their inventions, identify licensing and collaboration opportunities, and much more. At WIPO, we really want to make sure that all innovators across the world have access to the knowledge gathered in these databases, and that they know how to use it. This is why we continue to expand our Technology and Innovation Support Centers, TISCs for short. Typically, they are located in patent offices, universities, and science and tech parks in developing and least developed countries, and they enable researchers and inventors to access and use the technological information gathered in these databases, along with a score of scientific and technical publications linked to them. In recent years, the TISCs have also started developing additional innovation support services, such as technology transfer, IP management and commercialization. Since TISCs were launched back in 2009, 93 countries have established these networks, helping innovators develop new innovations across local communities. This is just one of the many examples I could give. We have submitted, of course, our WSIS plus 20 review, where you can find all our work on digital and development. But being here at the IGF and hearing all the discussions, including on AI, let me just mention that we also have our WIPO Conversation, a forum intended to provide everyone with a leading global setting to discuss the impacts of frontier technologies on the many different IP rights, and to bridge that information gap.
What is also important is that, following the different editions of the Conversation, we provide a range of tools and on-the-ground projects, such as the AI and IP Policy Toolkit or the upcoming AI and Infrastructure Interchange. I know the time is always running short, and I'm always looking at the clock and the moderator, but I just want to finish by saying that WIPO is always looking forward to supporting all our stakeholders and countries in ensuring that every innovator and creator can thrive. And we love doing this together with our UN family and our partners from across the world. Thank you very much.


Moderator: Thank you very much, Richard. I'll move on to Jason Slater from UNIDO. Jason, I recall that once the SDGs were adopted in 2015, UNIDO got together with us to highlight resilient infrastructure, SDG 9, and we had a great partnership within the WSIS process; there were also several high-level dialogues in 2017 and onwards. So UNIDO really plays a very important role on the resilient infrastructure front, and much more. We're also partnering for the youth track this year: UNIDO is bringing some young people to the WSIS high-level event, so please do join us to see the spirit of the youngsters and what they want us to do beyond 2025. Over to you, Jason.


Jason Slater: Thank you very much, and thank you for putting me on the spot; I don't have any prepared speaking notes whatsoever. But understanding the topic, which is key, around collaboration: as you pointed out, we've been working together with yourselves at ITU for a number of years, and with my colleague next door from FAO on how we collaborate on agriculture and supply chains, looking at certain value chains such as coffee. But just to take a slight step back: we hear about the spirit of cooperation, and we hear about what WSIS has been doing. As you mentioned, we've been collaborating for a number of years with WSIS on some of the action lines. Last year we had the adoption of the Global Digital Compact, and UNIDO has been appointed as one of the co-chairs, along with our colleagues from UNCTAD, on the inclusive and sustainable digital economy. What we are doing, using forums such as WSIS and AI for Good, is asking how we can have a call for solutions. From UNIDO's perspective, we're really looking at how we can build up this multi-stakeholder approach, through the private sector, academia and think tanks, to ultimately identify solutions that can solve the problems of our member states. That, of course, is not unique to us; a number of us on the panel are doing this, and I'm sure we're all working with similar ecosystems. That's something I think we can come together on more and more and amplify, because if we do this collectively, with each of our unique mandates and specializations, we will have a much greater impact. So, if I may, let me just touch on a few specific areas that we are now prioritizing.
In my role as Chief AI, Innovation and Digital Officer, providing services to our 172 member states, we realized that we have to focus on areas of AI that are perhaps not so commonly discussed at the moment. When we think about generative AI, it is all about Copilot, Gemini, GPT and so on, but what can you do when you leverage AI in, for example, smart manufacturing? How can you help a small or medium enterprise with a relatively old, aging production line inject it with AI-powered chips that can make that production line more energy-efficient? This is an area we want to support even more, so we're establishing a number of centres of excellence, for example in Addis Ababa in Ethiopia in collaboration with the Chinese government, in Morocco, and in Tunisia, with a number of others coming up in Latin America in the coming months, in Cuba, Venezuela and elsewhere, to see how we can build up this partnership-based approach and bring solutions from technology providers and industry so that our member states can ultimately benefit. Last but not least, I would also like to mention, in this space of innovation, how we can harness what's going on around start-ups and innovators. We have a programme referred to as ScaleX, which is basically threefold. It has an accelerator programme. And, again in collaboration with FAO last year, we ran an innovation challenge, where the wonderful people who won the award developed an AI chip that could smell food loss, which we're now actually looking to deploy in some of our projects.
In addition to this, we have the investment side as well, which is a collaboration between ourselves as UN agencies, the corporate sector and, in particular, fund managers, so that we can ultimately support start-ups in scaling their solutions and becoming investable. With that, I will pause, knowing the time, and say thank you very much for inviting me to this stage. I look forward to continuing our collaboration and will see you in a couple of weeks.


Moderator: Thank you, Jason. And for those of you who will join the high-level event: as part of the youth track, we will have a youth networking event, and all of us are invited; we're all young at heart, so you're all invited to the ITU Montbrillant building on the 7th for the youth party. Jason, you mentioned the hackathons and the smart challenges, which are really great, but one thing we really need to look at is the incubation of this good work. You did touch upon it: how do we incubate these startups? That's something we would really like to explore more with FAO and UNIDO. And while we are talking about FAO: Dejan, congratulations, you're a WSIS Prizes champion this year. Can you tell us more about the project and the work that you're doing?


Dejan Jakovljevic: Yes, of course. Before I mention the project, first of all, thank you for inviting me to the stage. The Food and Agriculture Organization's focus is basically to end hunger, and the way we approach that is by looking at better production, better nutrition, a better environment and a better life. To achieve that, we know that technology and digital opportunities offer us enablement, but also acceleration of the urgently needed transformation of agri-food systems. How do we do that? We don't look only at one sector, for example only improving production; we look at improving production together with the opportunities to transform. And I think this is common with what I'm hearing from our colleagues in other agencies: the opportunities we see and need to take advantage of require transformation, not simply doing the same thing more efficiently, but actual transformation, as in some of the projects mentioned by UNIDO. As for FAO's contribution, we focus on producing digital public goods, we contribute to digital public infrastructures, and we provide advisory and enablement mechanisms for countries to transform the agri-food sector through digitalization of its different elements. Also, if I look around just this panel, most of the enabling elements are actually among us, and this is where we see a huge benefit of UNGIS, the GDC and the other instruments we use for those enabling elements. I'll give a few examples. We know we still need to work on connectivity, so we rely heavily on the Broadband Commission and the work of ITU, and we really appreciate all the efforts. We also know we need to step up the educational elements, and we have UNESCO here as well. So jointly we have the mandates and the instruments to move forward.
One other area that is very important to mention is that we need to work cross-sectorally. FAO cannot cover all the sectors; our mandate is clear, so we depend on others, and we see that as an opportunity, so we will continue to work in this way. And yes, I think this is our second or third WSIS Champion Award. That particular project focuses on avoiding food loss, which is an area that is not yet fully explored: a lot of food is wasted even before harvest and before it gets to the table. That is one example. Some of the major capabilities we provide are for stakeholders, for targeted interventions and for investment cases; this is something we present every October on World Food Day, at FAO's 80th birthday this year. We also provide digital public goods to farmers: knowledge products in hand, or what we refer to as extension services. Again, looking at the clock, thank you very much, but I'll be around if any questions come up. Thank you.


Moderator: Thank you, Dejan. A colleague from UNESCAP messaged me that he's also online. Production team, could you give Siope the floor? It's 8.30 in Thailand, so Siope, thank you so much for being there. Siope, can you hear us? The floor is yours. Please go ahead.


Participant 4: Well, first of all, thank you so much, Gitanjali, for inviting ESCAP. I'm pleased to be part of the conversation this evening. ESCAP looks forward to working closely with ITU, as we have done in the past. Through our regional programs, we have worked together with ITU on the regional review of WSIS. And of course, every year we hold the Asia-Pacific Information Superhighway Steering Committee, in which we work together with ITU to bring in champions from the Asia-Pacific region to share their projects with other Asia-Pacific countries and promote best practices and lessons learned. We look forward to doing that this year as well; the next AP-IS Steering Committee meeting is planned for November, and we look forward to working with ITU and other agencies on it. As you may know, ESCAP continues to work with member states in the region to promote regional cooperation on connectivity through capacity building and policy advisory on digital transformation, and we look forward to working with other agencies through UNGIS to promote digital cooperation and transformation in the region. Thank you so much for the opportunity to contribute to today's conversation.


Moderator: Thank you, Siope, and thanks for being with us. It’s so late in Thailand, so thank you so much. With that, we’d like to open the floor. I do see some of you who were raising your hand earlier. Was it – guys, this is your chance to raise your – oh, ma’am, please introduce yourself and please take the floor.


Audience: Good afternoon, everyone. My name is Tsolofelo Mugoni, Internet governance coordinator from South Africa. Firstly, let me say that it is very pleasing to see such a wide range of UN agencies take the floor and talk to us about the work that they do, so thank you to the facilitator and the coordinator for this session. South Africa recognizes and commends the work of UNGIS, particularly in driving digital cooperation within the UN system. We particularly acknowledge and commend the work of UN agencies such as UNECA, which have played a critical role in supporting developing countries, especially across Africa, in accessing emerging technologies. By promoting technology transfer and facilitating the integration of ICTs into national strategies, UNECA has helped ensure that digital transformation contributes meaningfully to inclusive growth. Thank you.


Moderator: Thank you very much. We look forward to continuing to work with you. Thank you so much. Yes, ma'am, please go ahead.


Audience: Thank you so much for this very interesting and fruitful discussion. I'm Ashling Lynch-Kelly from Foundation The London Story. We're an Indian diaspora-led human and digital rights organization. We recently commissioned the first ever baseline study on the challenges and possible solutions for accessing digital health in India. While digital health is undoubtedly democratizing access to healthcare in India, we know that much of the population in India is marginalized, and that people in marginalized communities, as well as those who live in rural areas without adequate internet access or infrastructure, remain largely excluded and thus unable to access good-quality healthcare. Given these persistent barriers and the significant potential of digital health to advance SDG 3 and SDG 10 in India, we'd be interested to know if and how UNDP and other UNGIS members are working with stakeholders to ensure that the world's largest democracy can fully benefit from the transformative power of digital healthcare, and that access to healthcare in India expands to become fully inclusive, high-quality, and accessible for all. Thank you very much.


Moderator: Thank you very much.


Yu-Ping Lien: Thank you for the question. We actually have quite an extensive UNDP country office in India, which is implementing a variety of programs, including, I believe, on digital health. But the overall approach we take to digital health is founded on a digital public infrastructure approach, which emphasizes interoperable, inclusive, rights-based approaches to digital transformation. As part of that, we worked very closely with the India G20 presidency two years ago around this particularly groundbreaking approach to digital transformation, really thinking about how digital public infrastructure, and this notion of a rights-based, inclusive, people-centered approach to it, should be part of that conversation. We look forward to continuing the work with the Indian government. In fact, the Indian government's additional secretary, Abhishek, who is here at the Internet Governance Forum, was on a panel, I think two days ago, on AI implementation at the country level, where he reiterated this approach and outlined some example use cases, also highlighting digital health. This will continue to be an area of collaboration for UNDP, particularly through our country offices, emphasizing the need for such a rights-based, inclusive approach. We also welcome this kind of stakeholder input, so feel free to reach out to us if you have any specific suggestions on how we could improve this type of collaboration, or any messages we should continue to press forward as a global thought leader in the area of digital public infrastructure, especially as we look towards next year's AI Action Summit hosted by India in New Delhi. UNDP is working to make sure that this intersection between AI and digital public infrastructure, always grounded in UNDP's approach of being rights-based, inclusive, and people-centered, will take root.


Moderator: Thank you very much, Yu-Ping. We are also working with WHO on various standards at ITU, and you can get in touch with us for further details. Can we try to bring in Liping Zhang from UNCTAD, please? She is there, and she would like to very quickly finish her intervention. Liping, can you hear us? I can hear you. Please go ahead.


Liping Zhang: Well, given the time constraint, I'll be very brief. Basically, I want to inform you that the CSTD has completed its work on the WSIS plus 20 review, which will be reported to the General Assembly through ECOSOC. The outcome of the discussions at the session of the CSTD in April this year was reflected in the WSIS resolution. The resolution will be approved by ECOSOC in July at its management segment and then submitted to the General Assembly as input to its review to be held in December. In that resolution, the UNGIS contribution to the WSIS was highly appreciated and recognized. In particular, it also places an expectation on UNGIS to play an important role in the 20-year review of the WSIS, which is basically to have a bigger role as an outcome of the review at the General Assembly. The CSTD resolution has recommended that UNGIS should integrate the GDC into its action lines, and that it should also play a bigger role in developing implementation mapping relating to the GDC. The overall purpose is basically to align the WSIS with the Sustainable Development Goals and GDC implementation. I don't have anything else to add because of the time constraint; it's already after 3.45. If you have any questions, I'm ready to answer.


Moderator: Thank you very much, Liping. And we thank all UN agencies who have joined us here today. We'd like to invite you for a group photograph in the front. You can join us, please. And thank you to the audience for your wonderful questions and participation. Thank you.



Participant 1

Speech speed

152 words per minute

Speech length

567 words

Speech time

223 seconds

WSIS framework’s technology-neutral design has allowed it to remain relevant across digital innovation waves

Explanation

The WSIS framework was deliberately crafted with a technology-neutral and principle-based design that transcends specific technologies. This approach has enabled the framework to stay relevant through successive waves of digital innovation, from early internet adoption through mobile phones and social media to today’s AI era.


Evidence

The action lines of WSIS and outcome documents were designed to transcend specific technologies, allowing relevance from early internet days through mobile phones, social media, and into today’s AI era. UNESCO serves as lead facilitator for six action lines.


Major discussion point

WSIS Framework Evolution and Future Direction


Topics

Development | Legal and regulatory


Agreed with

– Moderator
– Participant 3

Agreed on

WSIS framework needs to continue evolving while maintaining its foundational principles



Participant 3

Speech speed

131 words per minute

Speech length

660 words

Speech time

302 seconds

WSIS should be expanded for the next 10 years with mechanisms to avoid duplication with Global Digital Compact

Explanation

Africa requests the continuation and expansion of WSIS for the next decade, along with the Internet Governance Forum. This expansion should include proper integration mechanisms to prevent duplication with the Global Digital Compact framework.


Evidence

Declaration adopted at African Internet Governance Forum in Tanzania calls for continuation of IGF for ten years. Need for integration framework and mechanisms to avoid duplication between WSIS and GDC.


Major discussion point

WSIS Framework Evolution and Future Direction


Topics

Development | Legal and regulatory


Agreed with

– Moderator
– Participant 1

Agreed on

WSIS framework needs to continue evolving while maintaining its foundational principles


Disagreed with

– Other UN agencies

Disagreed on

Timeline and scope of WSIS expansion


Need to institutionalize Internet Governance Forum and establish evaluation mechanisms for measuring progress

Explanation

There is a need to formalize the Internet Governance Forum structure and create systematic evaluation and monitoring mechanisms for both WSIS and IGF. This would help measure progress and make corrections where needed, as WSIS has targets but IGF currently lacks them.


Evidence

WSIS has targets but Internet Governance Forum lacks evaluation mechanisms. Need for institutionalization and monitoring systems to measure progress and make corrections.


Major discussion point

WSIS Framework Evolution and Future Direction


Topics

Development | Legal and regulatory


UN agencies should work together more efficiently given budget constraints and avoid duplication

Explanation

Given budget limitations, UN agencies need to collaborate more effectively and avoid duplicating efforts. The speaker emphasizes the importance of working together across agencies like UNECA, ITU, UNCTAD, and UNDP to maximize impact.


Evidence

Examples of collaboration between UNECA, ITU, UNCTAD, and UNDP. Mention of budget constraints requiring more efficient cooperation.


Major discussion point

UN Inter-Agency Collaboration and UNGIS Role


Topics

Development


Agreed with

– Moderator
– Yu-Ping Lien
– Jason Slater

Agreed on

Need for UN inter-agency collaboration and coordination to avoid duplication



Moderator

Speech speed

140 words per minute

Speech length

1718 words

Speech time

735 seconds

WSIS Plus 20 review provides opportunity to assess progress and explore future directions while maintaining multi-stakeholder approach

Explanation

The 20-year review of WSIS represents a pivotal moment to evaluate achievements and chart future directions. The review should preserve WSIS’s unique multi-stakeholder engagement model while adapting to new technological developments.


Evidence

WSIS is a UN process with UNGA and ECOSOC resolutions. UNGIS plays pivotal role in advancing WSIS mandates. Framework has evolved with technology over 20 years.


Major discussion point

WSIS Framework Evolution and Future Direction


Topics

Development | Legal and regulatory


Agreed with

– Participant 1
– Participant 3

Agreed on

WSIS framework needs to continue evolving while maintaining its foundational principles


UNGIS serves as effective coordination mechanism bringing UN agencies together to drive digital transformation

Explanation

The United Nations Group on the Information Society was created by the UN Chief Executives Board to ensure coordinated UN system work on digital transformation and sustainable development. It operates as an outcome-oriented group with chairs, vice chairs, and extended observer members.


Evidence

UNGIS created by the Chief Executives Board with chairs, vice chairs, and CEB members. Extended to observer members including new UN entities like ODET. Key mandates include policy coordination and multi-stakeholder engagement.


Major discussion point

UN Inter-Agency Collaboration and UNGIS Role


Topics

Development


Agreed with

– Yu-Ping Lien
– Participant 3
– Jason Slater

Agreed on

Need for UN inter-agency collaboration and coordination to avoid duplication


Member states recognize importance of integrating GDC commitments into WSIS architecture to avoid duplication

Explanation

Member states have formally acknowledged through ECOSOC resolution the need to integrate Global Digital Compact commitments into the existing WSIS framework. This integration aims to ensure a cohesive and consistent approach to digital cooperation without duplicating efforts.


Evidence

ECOSOC resolution adopted at annual CSTD recognizes importance of integrating GDC commitments into WSIS architecture to avoid duplication and ensure cohesive approach.


Major discussion point

Global Digital Compact Integration


Topics

Development | Legal and regulatory


Agreed with

– Liping Zhang
– Jason Slater

Agreed on

Integration of Global Digital Compact into WSIS framework to ensure coherence



Yu-Ping Lien

Speech speed

185 words per minute

Speech length

1071 words

Speech time

347 seconds

Inter-agency collaboration through UNGIS has been critical for policy coherence and leveraging comparative advantages

Explanation

The United Nations Group on Information Society has been instrumental in bringing together UN agencies to achieve policy coherence, share information, and leverage the comparative advantages and expertise of various UN systems. This collaboration has been particularly powerful in implementing WSIS action lines.


Evidence

UNGIS brings policy coherence, alignment, information sharing, and collaboration that leverages cooperative strengths and comparative advantages of various UN systems.


Major discussion point

UN Inter-Agency Collaboration and UNGIS Role


Topics

Development


Agreed with

– Moderator
– Participant 3
– Jason Slater

Agreed on

Need for UN inter-agency collaboration and coordination to avoid duplication


UNDP supports over 130 countries with digital and AI programs for sustainable development goals

Explanation

As the UN’s development wing present in over 170 countries, UNDP implements digital and AI programs in over 130 countries to achieve sustainable development goals. The organization supports governments with digital assessments, capacity building, digital public infrastructure, and technical advisory services.


Evidence

UNDP present in over 170 countries and territories. Programs in over 130 countries on leveraging digital and AI for SDGs. Support for over 60 countries in AI and digital assessments, capacity building, and digital public infrastructure.


Major discussion point

Digital Development and Capacity Building


Topics

Development | Infrastructure


Digital public infrastructure approach emphasizes interoperable, inclusive, rights-based digital transformation

Explanation

UNDP advocates for a digital public infrastructure approach that prioritizes interoperability, inclusivity, and rights-based principles in digital transformation initiatives. This approach was particularly emphasized during India’s G20 presidency and continues to guide UNDP’s global work.


Evidence

Collaboration with India G20 presidency on digital public infrastructure approach. Emphasis on rights-based, inclusive, people-centered approach to digital transformation. Upcoming AI Action Summit in New Delhi.


Major discussion point

Sectoral Applications and Digital Public Goods


Topics

Development | Human rights | Infrastructure


Agreed with

– Audience

Agreed on

Importance of inclusive, rights-based approach to digital transformation



Liping Zhang

Speech speed

128 words per minute

Speech length

287 words

Speech time

133 seconds

CSTD resolution expects UNGIS to play bigger role in WSIS Plus 20 review and GDC integration

Explanation

The Commission on Science and Technology for Development has completed its WSIS Plus 20 review work, with outcomes reflected in a resolution that places expectations on UNGIS to have an expanded role. The resolution will be submitted to the General Assembly as input for the December review.


Evidence

CSTD completed WSIS Plus 20 review work. Resolution to be approved by ECOSOC and submitted to General Assembly. Resolution places expectation on UNGIS for bigger role in review outcome.


Major discussion point

UN Inter-Agency Collaboration and UNGIS Role


Topics

Development | Legal and regulatory


UNGIS should integrate GDC into WSIS action lines and develop implementation mapping

Explanation

According to the CSTD resolution, UNGIS should take responsibility for integrating the Global Digital Compact into existing WSIS action lines and develop comprehensive implementation mapping related to the GDC. The overall purpose is to align WSIS with both the Sustainable Development Goals and GDC implementation.


Evidence

CSTD resolution recommends UNGIS integrate GDC into action lines and develop implementation mapping relating to GDC. Purpose is to align WSIS with SDGs and GDC implementation.


Major discussion point

Global Digital Compact Integration


Topics

Development | Legal and regulatory


Agreed with

– Moderator
– Jason Slater

Agreed on

Integration of Global Digital Compact into WSIS framework to ensure coherence



Sebastian Rovira

Speech speed

144 words per minute

Speech length

892 words

Speech time

369 seconds

ECLAC has digital agenda aligned with WSIS process focusing on productive development and digital transformation

Explanation

ECLAC has developed a comprehensive digital agenda organized around three main pillars: productive development, well-being, and transformation of the state. The agenda includes transversal axes covering meaningful connectivity, digital governance and security, and innovation with emerging technologies like AI.


Evidence

ECLAC agenda 2026 approved with three pillars: productive development, well-being, and state transformation. Transversal axes include meaningful connectivity, digital governance and security, and innovation with emerging technologies.


Major discussion point

Regional Implementation and Perspectives


Topics

Development | Infrastructure


Need for evidence-based policies and data availability for developing regions

Explanation

ECLAC emphasizes the critical importance of evidence-based policy making for developing regions, highlighting the need for comprehensive data and analytical tools. The organization has developed instruments like an observatory on digital development and digital formation schools to support this approach.


Evidence

Observatory on digital development established. Digital formation schools for Latin America and Caribbean. Working groups on digital economy, AI, meaningful connectivity, and data governance.


Major discussion point

Digital Development and Capacity Building


Topics

Development



Participant 4

Speech speed

118 words per minute

Speech length

208 words

Speech time

104 seconds

Asia-Pacific region promotes regional cooperation through steering committee meetings and best practice sharing

Explanation

ESCAP works closely with ITU through the Asia-Pacific Information Superhighway Steering Committee to bring together regional champions and promote best practices and lessons learned among Asia-Pacific countries. The organization continues to support member states in regional cooperation on connectivity and digital transformation.


Evidence

Asia-Pacific Information Superhighway Steering Committee meetings with ITU. Next meeting planned for November. Regional cooperation on connectivity through capacity building and policy advisory.


Major discussion point

Regional Implementation and Perspectives


Topics

Development | Infrastructure



Morten Langfeldt Dahlback Rapler

Speech speed

135 words per minute

Speech length

734 words

Speech time

325 seconds

UNU contributes through independent research, capacity building, and policy advice across multiple locations

Explanation

United Nations University operates as an independent research institution with about 1000 experts working in 19 locations across 14 countries. UNU is funded directly by member states rather than the UN general budget, allowing it to provide independent research, capacity building, and policy advice while supporting WSIS objectives.


Evidence

1000 experts in 19 locations across 14 countries. Funded directly by member states, not UN general budget. Co-hosted with universities and research organizations with local/regional government support.


Major discussion point

Digital Development and Capacity Building


Topics

Development



Richard Gooch

Speech speed

155 words per minute

Speech length

734 words

Speech time

283 seconds

WIPO databases contain millions of patent documents, with AI patent grants growing 3000% between 2010-2024

Explanation

WIPO maintains extensive intellectual property databases including 120 million patent documents, 17 million designs, and 68 million trademarks, all powered by AI tools for search and translation. Patent applications have grown fastest in computer technologies and digital communication, with AI patent grants increasing dramatically over recent years.


Evidence

120 million patent documents, 17 million designs, 68 million trademarks in databases. Patent applications grew fastest in computer technologies and digital communication at 8% annually. AI patent grants up 3000% between 2010-2024, 60% between 2021-2022.


Major discussion point

Technology Innovation and Intellectual Property


Topics

Legal and regulatory | Development


Technology and Innovation Support Centers help innovators in developing countries access patent information

Explanation

WIPO’s Technology and Innovation Support Centers (TISCs) are located in patent offices, universities, and science parks in developing and least developed countries. These centers enable researchers and inventors to access technological information and are expanding to include additional innovation support services like technology transfer and IP management.


Evidence

93 countries have established TISC networks since 2009. Located in patent offices, universities, and science and tech parks. Expanding services to include technology transfer, IP management, and commercialization.


Major discussion point

Technology Innovation and Intellectual Property


Topics

Development | Legal and regulatory



Jason Slater

Speech speed

167 words per minute

Speech length

694 words

Speech time

249 seconds

UNIDO appointed as co-chair for inclusive sustainable digital economy under GDC

Explanation

Following the adoption of the Global Digital Compact, UNIDO has been appointed as co-chair alongside UNCTAD for the inclusive sustainable digital economy component. UNIDO is using forums like WSIS and AI for Good to implement a call for solutions approach involving multi-stakeholder participation.


Evidence

Co-chair appointment with UNCTAD for inclusive sustainable digital economy under GDC. Using WSIS and AI for Good forums for call for solutions approach with private sector, academia, and think tanks.


Major discussion point

Global Digital Compact Integration


Topics

Development | Economic


Agreed with

– Moderator
– Liping Zhang

Agreed on

Integration of Global Digital Compact into WSIS framework to ensure coherence


UNIDO focuses on AI applications in smart manufacturing and establishing centers of excellence

Explanation

UNIDO prioritizes AI applications in smart manufacturing, helping small-medium enterprises integrate AI-powered solutions into aging production lines for improved energy efficiency. The organization is establishing centers of excellence in various regions including Ethiopia, Morocco, Tunisia, and planned centers in Latin America.


Evidence

Centers of excellence in Addis Ababa, Ethiopia (with the Chinese government), Morocco, Tunisia. Planned centers in Cuba, Venezuela. Focus on AI-powered chips for production line efficiency in SMEs.


Major discussion point

Technology Innovation and Intellectual Property


Topics

Development | Economic


Innovation challenges and accelerator programs help scale startup solutions for member states

Explanation

UNIDO operates ScaleX, a three-fold program including an accelerator component that runs innovation challenges in collaboration with other UN agencies. These programs help startups develop solutions for member states and become investable through partnerships with corporate sector and fund managers.


Evidence

ScaleX accelerator programme. Innovation challenge with FAO produced AI chip that could smell food loss. Collaboration with UN agencies, corporate sector, and fund managers to support startup scaling.


Major discussion point

Technology Innovation and Intellectual Property


Topics

Development | Economic



Dejan Jakovljevic

Speech speed

135 words per minute

Speech length

521 words

Speech time

230 seconds

FAO focuses on digital transformation of agri-food systems and producing digital public goods

Explanation

FAO approaches ending hunger through digital transformation of agri-food systems, focusing on better production, nutrition, environment, and life. The organization produces digital public goods, contributes to digital public infrastructures, and provides advisory services for countries to transform their agri-food sectors.


Evidence

Focus on better production, nutrition, environment and life. Digital public goods production and digital public infrastructure contributions. WSIS Champion Award for project on avoiding food loss before harvest.


Major discussion point

Sectoral Applications and Digital Public Goods


Topics

Development | Sustainable development


Cross-sectoral collaboration needed as individual agencies cannot cover all sectors

Explanation

FAO acknowledges that no single agency can cover all sectors within their individual mandates, making cross-sectoral collaboration essential. The organization sees this as an opportunity and depends on partnerships with other agencies to achieve comprehensive digital transformation across different sectors.


Evidence

FAO’s clear mandate limitations require dependence on other agencies. Collaboration opportunities across sectors for comprehensive coverage.


Major discussion point

Sectoral Applications and Digital Public Goods


Topics

Development



Audience

Speech speed

172 words per minute

Speech length

369 words

Speech time

128 seconds

Digital health democratizes healthcare access but marginalized communities remain excluded

Explanation

While digital health is democratizing access to healthcare in India, marginalized communities and rural populations without adequate internet access or infrastructure remain largely excluded. This creates barriers to accessing good-quality healthcare despite the significant potential of digital health to advance sustainable development goals.


Evidence

Baseline study commissioned on challenges and solutions for accessing digital health in India. Marginalized communities and rural areas lack adequate internet access and infrastructure.


Major discussion point

Sectoral Applications and Digital Public Goods


Topics

Development | Human rights | Infrastructure


Agreed with

– Yu-Ping Lien

Agreed on

Importance of inclusive, rights-based approach to digital transformation


Agreements

Agreement points

Need for UN inter-agency collaboration and coordination to avoid duplication

Speakers

– Moderator
– Yu-Ping Lien
– Participant 3
– Jason Slater

Arguments

UNGIS serves as effective coordination mechanism bringing UN agencies together to drive digital transformation


Inter-agency collaboration through UNGIS has been critical for policy coherence and leveraging comparative advantages


UN agencies should work together more efficiently given budget constraints and avoid duplication


UNIDO appointed as co-chair for inclusive sustainable digital economy under GDC


Summary

Multiple speakers emphasized the critical importance of UN agencies working together through mechanisms like UNGIS to avoid duplication, leverage comparative advantages, and maximize impact despite budget constraints.


Topics

Development


Integration of Global Digital Compact into WSIS framework to ensure coherence

Speakers

– Moderator
– Liping Zhang
– Jason Slater

Arguments

Member states recognize importance of integrating GDC commitments into WSIS architecture to avoid duplication


UNGIS should integrate GDC into WSIS action lines and develop implementation mapping


UNIDO appointed as co-chair for inclusive sustainable digital economy under GDC


Summary

There is strong consensus that the Global Digital Compact should be integrated into the existing WSIS framework rather than creating parallel structures, with UNGIS playing a key coordination role.


Topics

Development | Legal and regulatory


WSIS framework needs to continue evolving while maintaining its foundational principles

Speakers

– Moderator
– Participant 1
– Participant 3

Arguments

WSIS Plus 20 review provides opportunity to assess progress and explore future directions while maintaining multi-stakeholder approach


WSIS framework’s technology-neutral design has allowed it to remain relevant across digital innovation waves


WSIS should be expanded for the next 10 years with mechanisms to avoid duplication with Global Digital Compact


Summary

Speakers agreed that WSIS should continue for another decade, building on its technology-neutral foundation while adapting to new challenges and integrating with newer frameworks like the GDC.


Topics

Development | Legal and regulatory


Importance of inclusive, rights-based approach to digital transformation

Speakers

– Yu-Ping Lien
– Audience

Arguments

Digital public infrastructure approach emphasizes interoperable, inclusive, rights-based digital transformation


Digital health democratizes healthcare access but marginalized communities remain excluded


Summary

Both speakers highlighted the need for digital transformation initiatives to prioritize inclusion and rights-based approaches, ensuring marginalized communities are not left behind.


Topics

Development | Human rights | Infrastructure


Similar viewpoints

Both speakers emphasized the critical role of research, evidence-based policy making, and capacity building in supporting digital transformation, particularly for developing regions.

Speakers

– Sebastian Rovira
– Morten Langfeldt Dahlback Rapler

Arguments

Need for evidence-based policies and data availability for developing regions


UNU contributes through independent research, capacity building, and policy advice across multiple locations


Topics

Development


Both speakers highlighted the importance of collaborative approaches and innovation ecosystems, recognizing that no single agency can address all aspects of digital transformation alone.

Speakers

– Jason Slater
– Dejan Jakovljevic

Arguments

Innovation challenges and accelerator programs help scale startup solutions for member states


Cross-sectoral collaboration needed as individual agencies cannot cover all sectors


Topics

Development


Both regional commission representatives emphasized the importance of regional cooperation and coordination in implementing digital transformation initiatives aligned with global frameworks.

Speakers

– Participant 4
– Sebastian Rovira

Arguments

Asia-Pacific region promotes regional cooperation through steering committee meetings and best practice sharing


ECLAC has digital agenda aligned with WSIS process focusing on productive development and digital transformation


Topics

Development | Infrastructure


Unexpected consensus

Strong support for institutionalizing and formalizing digital cooperation mechanisms

Speakers

– Participant 3
– Liping Zhang
– Moderator

Arguments

Need to institutionalize Internet Governance Forum and establish evaluation mechanisms for measuring progress


CSTD resolution expects UNGIS to play bigger role in WSIS Plus 20 review and GDC integration


UNGIS serves as effective coordination mechanism bringing UN agencies together to drive digital transformation


Explanation

It was unexpected to see such strong consensus on the need for more formal institutional structures and evaluation mechanisms, given that many digital governance discussions often favor flexible, informal approaches. This suggests a maturation of the field toward more structured governance.


Topics

Development | Legal and regulatory


Universal recognition of the need for cross-sectoral and multi-stakeholder approaches

Speakers

– Dejan Jakovljevic
– Jason Slater
– Richard Gooch
– Yu-Ping Lien

Arguments

Cross-sectoral collaboration needed as individual agencies cannot cover all sectors


Innovation challenges and accelerator programs help scale startup solutions for member states


Technology and Innovation Support Centers help innovators in developing countries access patent information


UNDP supports over 130 countries with digital and AI programs for sustainable development goals


Explanation

The unanimous agreement across diverse UN agencies on the necessity of cross-sectoral collaboration was unexpected, as agencies often focus on defending their individual mandates. This consensus suggests a significant shift toward integrated approaches in digital development.


Topics

Development


Overall assessment

Summary

The discussion revealed strong consensus on key structural and operational aspects of digital cooperation, including the need for continued UN inter-agency collaboration through UNGIS, integration of the Global Digital Compact into existing WSIS frameworks, and the importance of inclusive, rights-based approaches to digital transformation.


Consensus level

High level of consensus with significant implications for the future of global digital cooperation. The agreement suggests a mature understanding among UN agencies of the need for coordinated, non-duplicative approaches to digital development. This consensus provides a strong foundation for implementing the WSIS Plus 20 review outcomes and integrating the Global Digital Compact effectively. The unexpected areas of consensus, particularly around institutionalization and cross-sectoral collaboration, indicate a readiness for more structured and integrated approaches to digital governance at the global level.


Differences

Different viewpoints

Timeline and scope of WSIS expansion

Speakers

– Participant 3
– Other UN agencies

Arguments

WSIS should be expanded for the next 10 years with mechanisms to avoid duplication with Global Digital Compact


Various approaches to WSIS Plus 20 review without specific 10-year expansion commitment


Summary

Participant 3 (representing Africa) specifically calls for a 10-year expansion of WSIS, while other speakers discuss the WSIS Plus 20 review process without committing to specific timeline extensions


Topics

Development | Legal and regulatory


Unexpected differences

Institutionalization approach for Internet Governance Forum

Speakers

– Participant 3
– Other speakers

Arguments

Need to institutionalize Internet Governance Forum and establish evaluation mechanisms for measuring progress


Various approaches to IGF continuation without specific institutionalization calls


Explanation

While most speakers discuss IGF as an ongoing process, Participant 3 specifically calls for institutionalizing IGF, which represents a more formal structural change that other speakers don’t explicitly address. This is unexpected as IGF has traditionally operated as a more flexible, multi-stakeholder forum


Topics

Development | Legal and regulatory


Overall assessment

Summary

The discussion shows remarkable consensus among UN agencies on core objectives of digital cooperation, WSIS continuation, and GDC integration, with only minor disagreements on implementation approaches and timelines


Disagreement level

Low level of disagreement with high collaborative spirit. The main differences are tactical rather than strategic, focusing on specific mechanisms, timelines, and institutional arrangements rather than fundamental goals. This suggests strong potential for unified implementation of WSIS Plus 20 outcomes, though some negotiation may be needed on specific procedural and timeline issues raised by regional representatives


Takeaways

Key takeaways

WSIS framework’s technology-neutral design has proven effective over 20 years, remaining relevant across successive waves of digital innovation from early internet to AI era


UNGIS (UN Group on Information Society) serves as an effective coordination mechanism that demonstrates successful inter-agency collaboration and avoids duplication of efforts


Strong consensus exists among member states and UN agencies to integrate Global Digital Compact (GDC) commitments into the WSIS architecture rather than creating parallel processes


Regional implementation through UN regional commissions has been crucial for WSIS success, with each region adapting the framework to local needs and challenges


Digital transformation must be inclusive, rights-based, and people-centered, with particular attention to marginalized communities and developing countries


Cross-sectoral collaboration is essential as no single UN agency can address all aspects of digital development alone


Evidence-based policy making and capacity building remain fundamental requirements for successful digital transformation


Innovation ecosystems involving startups, academia, and private sector partnerships are critical for scaling digital solutions


Resolutions and action items

UNGIS to play a bigger role in WSIS Plus 20 review process and develop implementation mapping for GDC integration


Establish mechanisms to avoid duplication between WSIS framework and Global Digital Compact implementation


Continue regional ministerial conferences and steering committee meetings to maintain regional engagement


Expand WSIS and Internet Governance Forum for next 10 years as requested by African region


Develop evaluation and monitoring mechanisms for WSIS and IGF to measure progress against targets


Institutionalize Internet Governance Forum to ensure continuity


Continue joint capacity building programs and evidence-based policy tool development across UN agencies


Maintain youth engagement through networking events and high-level participation in WSIS processes


Unresolved issues

Specific mechanisms for integrating GDC commitments into WSIS action lines remain to be developed


How to ensure adequate funding and resources for expanded WSIS and IGF mandates given budget constraints


Addressing persistent digital divides, particularly connectivity and electricity challenges in Africa and other developing regions


Ensuring marginalized communities and rural populations can access digital health and other services despite infrastructure barriers


Balancing innovation acceleration with inclusive development to prevent further marginalization of vulnerable populations


Establishing concrete metrics and evaluation frameworks for measuring WSIS implementation progress


Coordinating multiple regional approaches and priorities within a coherent global framework


Suggested compromises

Integrate GDC objectives into existing WSIS architecture rather than creating new parallel structures to avoid duplication


Leverage existing successful inter-agency mechanisms like UNGIS rather than establishing new coordination bodies


Combine global frameworks with regional adaptation to address local challenges while maintaining coherent approach


Balance technology innovation with inclusive development by embedding rights-based approaches in all digital initiatives


Use existing UN agency comparative advantages and expertise through collaborative partnerships rather than expanding individual mandates


Align WSIS continuation with SDG timelines and GDC implementation to create coherent development agenda


Thought provoking comments

Our work is indeed guided by a singular vision that digital transformation must serve the humanity, not the other way around… WSIS can strengthen its position as a hub for dialogue on emerging technologies, on issues such as misinformation, gender equality, and digital rights.

Speaker

Participant 1 (UNESCO)


Reason

This comment reframes the entire discussion by establishing a human-centered philosophy for digital transformation. It challenges the often technology-first approach by explicitly stating that technology should serve humanity rather than the reverse. This philosophical grounding provides a critical lens through which all subsequent technical discussions should be viewed.


Impact

This comment set the foundational tone for the entire session, establishing that despite the technical nature of WSIS processes, the ultimate goal is human welfare. It influenced subsequent speakers to frame their contributions in terms of human impact and inclusive development, rather than purely technical achievements.


I want to really emphasize this idea that in some ways, because UNDP has such a broad developmental mandate, we can work across all these sectors to really bring together digital transformation, digital cooperation in a holistic and comprehensive way… trying to bring it all together in a development perspective and a whole of government approach.

Speaker

Yu-Ping Lien (UNDP)


Reason

This insight highlights the critical importance of breaking down silos in digital cooperation. Rather than treating digital transformation as a separate technical domain, it advocates for integration across all development sectors. This systems thinking approach challenges the traditional compartmentalized approach to UN agency work.


Impact

This comment shifted the discussion from individual agency contributions to collaborative, cross-sectoral approaches. It prompted other speakers to emphasize their partnerships and collaborative efforts, moving the conversation toward more integrated solutions rather than isolated technical interventions.


I do also note that it is a difficult time for the multilateral system. and the international collaborative spirit that has brought us all together… we in the WSIS Plus 20 review process need to double down on the idea of delivery of impact, of leveraging existing institutions, interagency mechanisms, and collaborative efforts and partnerships that have worked, that have delivered.

Speaker

Yu-Ping Lien (UNDP)


Reason

This comment introduces crucial political realism into what could have been a purely technical discussion. It acknowledges the broader geopolitical challenges facing multilateral cooperation while advocating for pragmatic focus on proven mechanisms. This adds urgency and strategic thinking to the conversation.


Impact

This observation brought a sobering reality check to the discussion, prompting speakers to emphasize concrete deliverables and proven partnerships rather than aspirational goals. It influenced the tone to become more focused on practical implementation and measurable outcomes.


We request the expansion of WSIS for the next 10 years in Africa and the IGF also for the next 10 years and of course to avoid any duplication with global digital compact, we should put in place a mechanism for integration of this framework as well as a mechanism for evaluation and monitoring for WSIS, for IGF to measure the progress.

Speaker

Participant 3 (Regional Commission representative)


Reason

This comment introduces critical governance and accountability dimensions that were missing from earlier technical discussions. It challenges the assumption that good intentions automatically lead to good outcomes by demanding concrete monitoring and evaluation mechanisms. The call for integration rather than duplication addresses a key inefficiency in international cooperation.


Impact

This intervention shifted the discussion from what agencies are doing to how effectiveness can be measured and ensured. It introduced the crucial question of institutional architecture and accountability, influencing subsequent speakers to address coordination mechanisms and measurable outcomes.


How can you help a small-medium enterprise who’s got a relatively old-aging production line and inject it with AI-powered chips that can make their production line more energy-efficient?… We have a programme that’s referred to ScaleX, which is basically Free Fold. It has an accelerator programme.

Speaker

Jason Slater (UNIDO)


Reason

This comment grounds the abstract discussion of digital transformation in concrete, practical applications. It moves beyond high-level policy discussions to address the real challenges faced by small businesses in developing countries. The focus on practical AI applications for manufacturing represents a shift from consumer-focused digital discussions to productive sector transformation.


Impact

This intervention brought the discussion down from policy level to implementation reality, prompting other speakers to provide more concrete examples of their work. It demonstrated how digital transformation can address real economic challenges, influencing the conversation toward practical solutions rather than theoretical frameworks.


We basically don’t look only at one sector and, for example, only to improve production, but we also look into improving production and opportunities to transform… The opportunities we see and we need to take advantage of require transformation, not simply doing the same thing, but maybe more efficient, but actually transformation.

Speaker

Dejan Jakovljevic (FAO)


Reason

This comment introduces a crucial distinction between efficiency improvements and fundamental transformation. It challenges incremental thinking by arguing that digital technologies require rethinking entire systems rather than just optimizing existing processes. This systems transformation perspective is critical for addressing complex challenges like food security.


Impact

This insight elevated the discussion from technical implementation to strategic transformation thinking. It influenced other participants to consider how their work contributes to fundamental system changes rather than just incremental improvements, adding depth to the conversation about digital transformation’s potential.


Overall assessment

These key comments fundamentally shaped the discussion by introducing three critical dimensions that elevated it beyond a routine inter-agency coordination meeting. First, they established a human-centered philosophical foundation that grounded all technical discussions in human welfare considerations. Second, they introduced political realism and accountability demands that challenged participants to focus on measurable outcomes and proven mechanisms rather than aspirational goals. Third, they bridged the gap between high-level policy discussions and practical implementation challenges, forcing speakers to provide concrete examples and address real-world constraints. Together, these interventions transformed what could have been a series of agency reports into a substantive dialogue about the future of digital cooperation, emphasizing integration, transformation, and accountability as core principles for the WSIS+20 process.


Follow-up questions

How can we better incubate startups and good work from hackathons and smart challenges?

Speaker

Moderator (Gitanjali)


Explanation

The moderator specifically mentioned this as something they would like to explore more with FAO and UNIDO, indicating a need for better mechanisms to support innovation beyond initial challenges


How can UNDP and UNGIS members work with stakeholders to ensure India can fully benefit from transformative digital healthcare while making it inclusive and accessible for all?

Speaker

Ashling Lynch-Kelly (Foundation The London Story)


Explanation

This question addresses the gap between digital health democratization and the exclusion of marginalized communities in India, seeking specific collaboration approaches


How can we develop better mechanisms for integration of WSIS framework with Global Digital Compact to avoid duplication?

Speaker

Maghtar (UN Regional Commission representative)


Explanation

This addresses the need for practical implementation of the policy directive to integrate GDC commitments into WSIS architecture without creating redundancies


How can we establish mechanisms for evaluation and monitoring for WSIS and IGF to measure progress and make corrections?

Speaker

Maghtar (UN Regional Commission representative)


Explanation

This highlights the need for accountability and progress tracking systems, noting that WSIS has targets but IGF lacks them


How can we institutionalize the Internet Governance Forum?

Speaker

Maghtar (UN Regional Commission representative)


Explanation

This addresses the need for more formal structures and processes within IGF to ensure continuity and effectiveness


How can we scale up digital complexity approaches and tools for supporting digital transformation?

Speaker

Sebastian Rovira (UN ECLAC)


Explanation

This requires not just technical collaboration but also political alignment and resource mobilization to implement new analytical tools across regions


How can we better leverage existing institutions, interagency mechanisms, and collaborative partnerships that have proven effective?

Speaker

Yu-Ping Lien (UNDP)


Explanation

This addresses the need to strengthen and expand successful collaboration models during a difficult time for the multilateral system


How can we ensure diplomats and stakeholder communities in New York are better informed about the WSIS process evolution?

Speaker

Moderator (Gitanjali)


Explanation

This is crucial for the upcoming negotiations in December and requires advocacy efforts to demonstrate that WSIS has evolved with technology over 20 years


Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

Open Forum #46 Developing a Secure Rights Respecting Digital Future


Session at a glance

Summary

This IGF open forum discussion focused on developing a secure, rights-respecting digital future through collaborative governance approaches. The session was chaired by Neil Wilson from the UK’s Foreign Commonwealth and Development Office and featured panelists from government, international organizations, academia, and civil society discussing digital development challenges and solutions.


Alessandra Lustrati outlined the UK’s comprehensive digital development framework, emphasizing three key pillars: digital inclusion (focusing on meaningful connectivity and digital skills), digital responsibility (addressing cybersecurity and online safety), and digital sustainability (considering environmental impacts). She highlighted the Digital Access Programme, a partnership between FCDO and other organizations working across five countries – Brazil, Indonesia, Kenya, Nigeria, and South Africa – which has reached 15 million people in over 5,000 communities.


Samantha O’Riordan from the ITU emphasized that 2.6 billion people remain offline globally, with the majority in Africa and Asia. She stressed the importance of meaningful connectivity that provides safe, satisfying, and productive online experiences at affordable costs. The ITU has been supporting countries through capacity building, establishing computer incident response teams, and developing national cybersecurity strategies.


Leonard Mabele discussed Kenya’s National Digital Master Plan, focusing on innovative spectrum sharing approaches including TV white spaces and Wi-Fi 6E to deliver last-mile connectivity. Professor Luzango Mfupe shared South Africa’s experiences with community networks and spectrum innovation, noting that data costs remain a significant barrier for rural households who must choose between connectivity and basic necessities.


Maria Paz Canales from Global Partners Digital emphasized the need for participatory approaches in digital transformation, ensuring that local communities are involved from the design stage rather than being passive recipients of top-down technological solutions. She stressed the importance of balancing innovation with human rights protection and establishing mechanisms for monitoring and course correction.


The discussion highlighted the critical balance between expanding connectivity and managing associated risks, including cybersecurity threats, technology-facilitated gender-based violence, and digital divides. All panelists agreed that sustainable digital transformation requires multi-stakeholder collaboration, community-centered approaches, and frameworks that prioritize both innovation and rights protection in building an inclusive digital future.


Keypoints

## Major Discussion Points:


– **Digital Divide and Meaningful Connectivity**: The persistent challenge of connecting 2.6 billion people who remain offline globally, with emphasis on moving beyond basic connectivity to “meaningful connectivity” that provides safe, satisfying, and productive online experiences at affordable costs.


– **Multi-stakeholder Approach to Digital Development**: The UK’s Digital Access Programme framework focusing on three pillars – digital inclusion (connectivity and skills), digital responsibility (cybersecurity and online safety), and digital sustainability (environmental impact considerations).


– **Spectrum Innovation and Community Networks**: Technical solutions for last-mile connectivity including TV white spaces, Wi-Fi 6E, private LTE/5G networks, and community-based approaches that start from local needs rather than top-down technology deployment.


– **Balancing Innovation with Risk Management**: The challenge of promoting digital transformation while addressing emerging threats like cybercrime, technology-facilitated gender-based violence, misinformation, and ensuring cybersecurity capacity building, particularly in least developed countries.


– **Human Rights-Centered Digital Transformation**: The importance of inclusive, participatory approaches that involve local communities in designing and implementing digital strategies, ensuring that marginalized groups have meaningful participation and that policies are responsive to local contexts and needs.


## Overall Purpose:


This IGF open forum aimed to explore collaborative solutions for developing a secure, inclusive, and rights-respecting digital future. The session focused on sharing practical experiences and frameworks for digital development, particularly through the UK’s Digital Access Programme partnerships in Brazil, Indonesia, Kenya, Nigeria, and South Africa, while addressing how to balance technological innovation with human rights protection and inclusive governance.


## Overall Tone:


The discussion maintained a consistently collaborative and constructive tone throughout. It was professional yet accessible, with speakers building upon each other’s points rather than debating. The tone was solution-oriented and practical, focusing on sharing concrete examples and lessons learned from field implementation. There was a sense of urgency about addressing digital divides while maintaining optimism about the potential for multi-stakeholder partnerships to create positive change. The conversation remained respectful and inclusive, with clear efforts to bridge different perspectives from government, international organizations, academia, and civil society.


Speakers

– **Neil Wilson** – Chair of the session, from the cyber policy department of the UK Foreign Commonwealth and Development Office


– **Alessandra Lustrati** – Head of the Digital Development Cluster in the Foreign, Commonwealth and Development Office, Senior Private Sector Development Advisor


– **Samantha O’Riordan** – Based at the ITU in Geneva, part of the ITU Development Sector, working on connecting the unconnected


– **Leonard Mabele** – Leads research and innovation at the African Advanced Level Telecommunications Institute (AFRALTI) based in Nairobi, PhD candidate at Strathmore University


– **Luzango Mfupe** – Professor, Chief researcher at the Council for Scientific and Industrial Research (CSIR) in South Africa, research focus on connecting the unconnected


– **Maria Paz Canales** – Head of Policy and Advocacy at Global Partners Digital, a civil society organization based in the UK working globally on human rights considerations in digital policy


**Additional speakers:**


– **Lea Kaspar** – Mentioned in the transcript as being introduced by Neil Wilson, but appears to be the same person as Maria Paz Canales based on the context and responses given


Full session report

# Summary: Building a Secure, Rights-Respecting Digital Future Through Collaborative Governance


## Introduction and Session Framework


This IGF open forum discussion, chaired by Neil Wilson from the UK’s Foreign Commonwealth and Development Office cyber policy department, took place during the 20th anniversary of IGF and following the recent adoption of the Global Digital Compact. The session brought together representatives from government, international organisations, academia, and civil society to explore collaborative solutions for developing a secure, inclusive, and rights-respecting digital future.


Wilson framed the discussion around fundamental questions: How can we ensure that all voices, especially those historically excluded, are heard in shaping our digital future? How do we connect the unconnected whilst balancing innovation with rights protection?


## The UK’s Digital Development Framework


Alessandra Lustrati, Head of the Digital Development Cluster at the Foreign, Commonwealth and Development Office, outlined the UK’s approach to digital development through three interconnected pillars, emphasising that digital transformation should encompass government and society broadly, not just economic transformation.


The first pillar, **digital inclusion**, addresses meaningful connectivity beyond physical access, including digital skills development, relevant content creation, and accessibility for underserved communities. Lustrati stressed that digital skills development must always include cyber hygiene awareness and online safety tools.


The second pillar, **digital responsibility**, focuses on managing risks including cybersecurity threats, online safety concerns, and technology-facilitated gender-based violence (TFGBV), with emphasis on prevention rather than merely responding to consequences.


The third pillar, **digital sustainability**, considers the environmental impacts of digital technologies.


The UK’s Digital Access Programme, implemented in partnership with the Association for Progressive Communications (APC), works across five countries—Brazil, Indonesia, Kenya, Nigeria, and South Africa—and has reached 15 million people in over 5,000 communities. The programme emphasises community networks that start from understanding local needs rather than imposing top-down solutions, supporting local tech entrepreneurship and prioritising local organisations in delivery models.


## Global Connectivity Challenges: ITU Perspectives


Samantha O’Riordan from the ITU Development Sector highlighted that 2.6 billion people remain offline globally, with the majority in Africa and Asia. She distinguished between basic connectivity and meaningful connectivity, noting that whilst 97% of the world has mobile network coverage, a significant usage gap remains.


Meaningful connectivity requires “a safe, satisfying, enriching, and productive online experience at an affordable cost.” The usage gap persists due to affordability issues and lack of digital skills, awareness, relevant local content, and trust in online services.


The ITU has established 24 computer incident response teams and developed national cybersecurity strategies in multiple countries. O’Riordan emphasised that cybersecurity must be foundational to digital development, noting that least developed countries and small island developing states lag 10+ years behind in cybersecurity capacity. She also mentioned the upcoming World Telecommunications Development Conference in Baku.


## African Perspectives: Kenya’s Innovation Approach


Leonard Mabele from the African Advanced Level Telecommunications Institute discussed Kenya’s National Digital Master Plan, which emphasises affordable meaningful access, digital skills development, innovation, and digital government services.


Mabele highlighted connectivity challenges in underserved areas, noting that regions like Ustia County only have 2G/3G access despite high population density. He questioned whether accurate population data exists for planning purposes, suggesting underserved populations may be systematically undercounted.


Kenya’s approach emphasises spectrum innovation, including TV White Spaces (with framework development since 2020) and spectrum sharing to reduce connectivity costs and enable last-mile access. Dynamic spectrum access and Wi-Fi 6E in the 6 GHz band can enhance capacity for underserved communities. These technical innovations are coupled with community-focused approaches considering local needs, particularly in agriculture.


## South African Experiences: Community Networks and Affordability


Professor Luzango Mfupe from the Council for Scientific and Industrial Research in South Africa noted that whilst South Africa has achieved 78% internet connectivity, only 14.5% have fixed internet at home, highlighting significant infrastructure gaps.


Mfupe provided a striking affordability analysis: data costs in South Africa represent 10% of the average household food budget, meaning rural families must choose between connectivity and basic necessities like bread.


South Africa has focused on spectrum innovation around 3.8-4.2 GHz and lower 6 GHz bands to enable affordable connectivity. The country has supported 13 small-medium enterprises led by women, youth, and persons with disabilities to deploy rural connectivity, connecting over 70,000 rural users daily through community-based initiatives that include both technical and business model capacity building.


## Civil Society Perspectives: Human Rights and Participatory Governance


Maria Paz Canales from Global Partners Digital (introduced in the session under the name Lea Kaspar) emphasised moving beyond top-down approaches to embrace participatory governance models. She argued that effective digital transformation requires local communities to be involved from the design stage rather than being passive recipients.


She stressed that “the only way to effectively respond to local community needs and realities is to have digital transformation policies produced and discussed at the local level with relevant actors,” including traditionally marginalised communities. This approach requires ongoing assessment of technology’s impact and establishing oversight mechanisms that can adapt to changing circumstances.


Canales emphasised that participatory processes must be meaningful rather than tokenistic, ensuring marginalised groups have genuine influence over decisions affecting them.


## Key Areas of Agreement


Several areas of consensus emerged among speakers:


– **Meaningful connectivity** requires more than basic access—it must provide safe, satisfying, enriching, and productive online experiences at affordable costs


– **Community-based approaches** are essential for sustainable connectivity, with development being community-driven and responsive to local contexts


– **Spectrum innovation** and dynamic sharing are crucial for making connectivity more affordable and accessible


– **Digital skills development** must integrate safety and security awareness from the outset


## Technical Innovation and Partnerships


The discussion highlighted the importance of partnerships, with specific mention of the Dynamic Spectrum Alliance as a key partner in spectrum sharing initiatives. Technical innovations discussed included TV White Spaces, dynamic spectrum access, and community network models that reduce costs while maintaining quality.


## Ongoing Challenges


Several challenges remain unresolved:


– Balancing innovation promotion with preventing harms such as cybersecurity threats and disinformation


– Accurate population mapping in underserved areas for better planning


– Sustainable financing mechanisms for long-term digital infrastructure in rural communities


– Addressing the persistent usage gap even where network coverage exists


## Conclusion


The session demonstrated alignment around principles of inclusive, responsible, and sustainable digital transformation. The practical experiences shared—from the UK’s multi-country programme to Kenya’s spectrum innovations and South Africa’s community networks—provide concrete examples of collaborative approaches that combine technical innovation with community engagement.


The discussion emphasised that building a secure, rights-respecting digital future requires moving beyond technical solutions to embrace participatory governance models that ensure historically excluded voices are heard in shaping digital transformation. The collaborative approaches explored provide a foundation for continued progress, though sustained commitment to multi-stakeholder collaboration and community-centred development remains essential.


Session transcript

Neil Wilson: Good morning, everyone. Thank you so much for joining here in person and online. And welcome to this IGF open forum on developing a secure, rights-respecting digital future. My name is Neil Wilson. I come from the cyber policy department of the UK Foreign, Commonwealth and Development Office. And I’m delighted to be chairing this session alongside such an esteemed panel at such a pivotal moment in global digital governance. And you will all have heard a lot this week about the critical juncture we find ourselves at here at the 20th anniversary and indeed the 20th edition of the IGF. Following the adoption of the Global Digital Compact and amidst the WSIS Plus 20 review, both the scale of the challenge and the urgency of addressing it have arguably never been clearer. We have been continually reminded this week, as in our daily lives, that the digital world is no longer a separate space. It’s the very infrastructure of our economies, our societies, our daily lives, and how we govern these technologies is critical to how we govern ourselves, especially for those of us undergoing digital transformation. Indeed, we’re in a period of immense change and it shows no signs of slowing as we embrace new and emerging technologies. And with this transformation comes a really complex web of challenges: cybersecurity threats, widening digital divides, ethical dilemmas in AI, and the urgent need to ensure ultimately that digital transformation respects human rights and promotes inclusion. But this session is about more than just identifying problems. In line with this year’s IGF theme of building governance together, this session is about exploring collaborative, inclusive and accountable solutions. So today we’ll be asking a wide range of questions. How do we ensure that all voices, especially those historically excluded, are heard in shaping our digital future? How do we connect the unconnected? How do we balance innovation with rights protection? 
How can we build resilient, rights-respecting digital infrastructure that serves everyone everywhere? So to help us unpack these questions, I’m joined by an outstanding selection of panellists who I will actually ask to introduce themselves, so it’s not just me speaking at this top section. So to my right, we have Alessandra Lustrati. Alessandra, do you want to introduce yourself?


Alessandra Lustrati: Absolutely. Thank you so much, Neil. Good morning, everybody, online and in person. Thank you for waking up this early to join us. I’m Alessandra Lustrati. I’m the head of the Digital Development Cluster in the Foreign, Commonwealth and Development Office. I’m also a Senior Private Sector Development Advisor. Thank you.


Neil Wilson: Thank you, Alessandra. I think online we have Samantha O’Riordan from the ITU. Samantha, can you hear us?


Samantha O’Riordan: Yes, I can hear you. Thank you. So my name is Samantha O’Riordan. I’m based at the ITU in Geneva and I am part of the ITU Development Sector and working actually with Alessandra. We have a partnership working to assist several countries in connecting the unconnected.


Neil Wilson: Thank you, Samantha, for joining us. Great to see you. Also joining us online, we have Leonard Mabele. Leonard, are you with us?


Leonard Mabele: Yes. Hello, Neil. Hello, everyone. I hope you can all see me. My name is Leonard Mabele. Greetings from Kenya. I lead research and innovation at the African Advanced Level Telecommunications Institute, AFRALTI, which is based in Nairobi, and I'm also a PhD candidate at Strathmore University. Through the two institutions, that is AFRALTI and Strathmore, we've been working closely with Alessandra through the Digital Access Initiative at FCDO. Looking forward to speaking more.


Neil Wilson: Thank you, Leonard. Great to see you. Next, we have Professor Luzango Mfupe. Professor, can you hear us?


Luzango Mfupe: Yes, Neil. Good morning, colleagues. I'm Luzango Mfupe. I'm a chief researcher at the Council for Scientific and Industrial Research, CSIR, here in South Africa. My area of research interest is connecting the unconnected, and I've been working with Alessandra and the FCDO on a number of initiatives. Thank you.


Neil Wilson: Thank you, Professor. And now returning to the room for our final panelist here in Oslo, Lea Kaspar.


Maria Paz Canales: Thank you very much to the FCDO for the invitation to be here speaking with you. I'm Lea Kaspar. I'm the head of Policy and Advocacy at Global Partners Digital, a civil society organisation based in the UK but working globally with partners across different regions to embed human rights considerations in digital policy.


Neil Wilson: Thank you, Maria. Each of our panellists brings quite a unique lens to this conversation, as do you, our audience, both here in person and those of you joining online from all around the world. It's really wonderful to use this opportunity to bridge these different perspectives and encourage multi-stakeholder dialogue across government, industry, academia, the technical community and civil society. Together, we're going to explore how we can shape a secure, inclusive and rights-respecting digital future. Just a very quick note on how we will run this session: to kick things off, each of our panellists will provide some opening remarks on the principles they see as most relevant to developing a secure, rights-respecting digital future, and then we'll dive into discussion as a panel before we open it up to our audience, both in the room and those of you joining online. So without further ado, Alessandra, would you like to kick us off with the principles you see as most relevant to developing a secure and rights-respecting digital future?


Alessandra Lustrati: Thank you so much, Neil. Let me see whether... ah, they're opening up my presentation. Thank you, tech team; I guess you can go into presentation mode. So, good morning again. We've started getting to know each other, so I hope you're looking forward to this quite diverse set of contributions. My task this morning is to provide you with an overview of the approach of the UK government to digital development, and as usual, before we say what we do and how we do it, it's always good to ask why we do it, even this early in the morning. When we reflect on the why, we normally try to organise it on four different levels, and apologies that the text may be a little bit small for reading, but I'll run through the key concepts for you. First of all, digital transformation is now widely recognised as an absolutely key enabler of social and economic development, even an accelerator of the SDGs, as many people like to say, and this has been further accelerated and amplified by upcoming technologies, including AI, which is already here with us. All of this can enable development at different levels. But at the same time there are problems that we need to solve.
The first one is the digital divide. We know that 2.6 billion people are still offline in the world, but we have other divides, plural, including on digital skills and the access and accessibility of digital content and services. We also have specific gaps like the gender digital divide, and many of you are very familiar with these issues. On top of the divides, we need to think about the risks, and as you all know, the risks that have been developing and accelerating over time include cyber security threats and online safety risks, but also those risks that AI has amplified and is amplifying, like misinformation and disinformation. So these are the four levels that, in a way, justify why we want to work on digital development. But what is digital development for us? The definition for FCDO is that we want to support our partner countries in achieving an inclusive, responsible and sustainable digital transformation. It's quite a mouthful, so I'm going to unpack it for you using our policy framework, which is here, very colourful. Digital development is quite a complex concept, and I know that different colleagues and stakeholders in the IGF and beyond define it in many different ways. We find this way of articulating our thinking quite useful, and we've developed this policy framework based on quite a few years of experience of working with partner countries to promote the use of digital technologies to advance development. When we think about digital transformation, we are referring to digital transformation typically of the economy, as people think of spontaneously, but also very much of government and of society in the broad sense of the term. So it's a very broad approach to digital transformation. However, we don't want to promote digital transformation just for the sake of it.
We want it to be inclusive, responsible and sustainable. So under the pillar of digital inclusion, we focus of course on the foundational block without which we cannot do anything, which is inclusive, affordable and sustainable connectivity, especially connectivity at the last mile and for the most underserved. Within that same bucket of digital inclusion, we also look at the situation of specifically underserved and marginalised communities: how they can be connected, but also how they can access digital content and services that are relevant to them, and how they can develop and use digital skills at different levels, so that digital connectivity becomes meaningful to them, productive, and really makes sense for their context. Then, moving on to managing the risks, there is the bucket of digital responsibility, within which we include all our work (of course not just our work as the UK, but the work with all the partners and stakeholders that we collaborate with all the time) on cyber security capacity building and cyber hygiene awareness, but also the promotion of online safety. We have a very strong emphasis on technology-facilitated gender-based violence, unfortunately a phenomenon that has been growing over the past few years, and also on the importance of data protection; with the advance of AI, the protection and use of data, and the transparency of it, are becoming even more critical issues. And last but not least, we have digital sustainability. This is a pillar that we added to our framework a couple of years ago, so it's a bit more recent. Like many other organisations around the world, we have started to think: yes, digital transformation is really critical, it brings a lot of benefits, and we can leverage it in a positive way to provide digital tools and platforms for solutions on climate change adaptation and resilience for local communities; however, there is also a clear environmental impact from the technologies and the data themselves, which we need to manage.
The last part of my presentation gives you a practical example of how we apply all of this thinking and our policy and strategy approach, and I will focus specifically on what we call, quote-unquote, our flagship programme on digital access. The Digital Access Programme is a partnership between FCDO and DSIT, the Department for Science, Innovation and Technology. We work together across government to promote three pillars of work. The first one, pillar one, is on digital inclusion, and it basically works at two levels. One is the level of the policies, regulatory frameworks and standards that create the enabling environment, that system-wide change that can support and enhance digital inclusion. At the market and community level, we specifically focus on testing technology and business models that can enable, first of all, that famous inclusive, meaningful, affordable, sustainable connectivity at the last mile, but also all the other models that can help with digital skills, access to content and services, et cetera. The second pillar is on trust and resilience, so it may make you think back to what I explained as digital responsibility, with a lot of emphasis on cyber security capacity building, but also work on online safety and data protection. And last but not least, pillar three is about taking all of this work, creating that positive environment for the local digital economy and ecosystem, and specifically supporting forms of tech entrepreneurship in the local digital economies of our five focal countries that you see listed at the top (Brazil, Indonesia, Kenya, Nigeria and South Africa), to facilitate and stimulate digital innovations that are useful for local development challenges. This also creates opportunities for business partnerships and investment collaborations across borders. From those five countries, today we have Kenya and South Africa represented extremely well. We work also in the other three, and we are amplifying the work of the Digital Access Programme, or DAP, to the regions by sharing knowledge and, on a demand basis, disseminating the models that we have demonstrated over the years. To conclude, the top-line results of the programme so far are that we have reached 15 million people across the five countries, in over 5,000 communities, 5,555 to be precise, which is quite easy to remember, where we have sustainably improved people's digital inclusion. You could say that against the 2.6 billion people who are still offline, 15 million is just a drop in the ocean. But what is important is not so much the number of people that we reach in a sustainable way; it is the models and practices that we try to demonstrate with a multi-stakeholder approach, and how all of this gets embedded, through a lot of capacity building and technical assistance, so that local organisations and local stakeholders can take the work forward. I should note that the delivery model is very flexible and agile, and that we give huge priority to working with local organisations, but we also have fantastic global partners.
Of course, the ITU is on the line with us, and you’ve heard from the other partners in Kenya and South Africa, you’ll listen to them in a minute. We also work with the Dynamic Spectrum Alliance, the British Standards Institute and the British Council on the various aspects of the programme. So I will stop here and I hope that this gives you enough of a framework of our thinking of digital development and also the overview of the programme and now you will hear more specific presentations on some of the activities. Thank you so much.


Neil Wilson: Thank you Alessandra, a really comprehensive overview of the UK approach there and I’m sure plenty of material for some really rich discussion to follow. So turning now to Samantha O’Riordan from the ITU. I’ll turn over to you for your opening remarks on how we can develop a secure and rights-respecting digital future. Thank you.


Samantha O'Riordan: Thank you, Neil, and good morning, everyone. I represent the International Telecommunication Union, ITU, which is the UN specialised agency for information and communication technologies, and this year ITU is proud to turn 160 years old. Even since the beginning of the ITU, there have been concerns about trustworthy communication and about interference. Back in the day it was interference on cables, but things have moved on, and technology has progressed with the rise of new technologies such as AI and quantum computing. Cyber security has become foundational to digital development and should be part of every layer of technological advancement to ensure trust and resilience. It is important to note that as we progress in this digital age there are still, as Alessandra mentioned, 2.6 billion people who are offline, and the majority of them can be found in Africa and Asia, so it is a disproportionate spread. It's also important to note that while we talk about coverage, and we say that about 97% of the world is now covered by a mobile network, there is also a usage gap. There are many reasons why a usage gap still remains; the two primary ones are affordability, and a lack of digital skills, awareness, knowledge, local content and trust. It is important that people feel safe and secure online. That is why ITU has supported the UN targets on meaningful connectivity for 2030, which state that it's important for those who have connectivity to have meaningful connectivity, meaning that users have access to a safe, satisfying, enriching and productive online experience at an affordable cost.
In terms of ITU-D and how we support countries: we have been helping countries create enabling policy and regulatory environments through research, capacity building and awareness raising, and we have been promoting inclusive and secure telecommunications for sustainable development. On the topic of cyber security, ITU has been at the forefront of capacity development for over 20 years, through the WSIS Action Line C5 and the World Telecommunication Development Conference in 2006. Even after a decade of the Global Cybersecurity Index, you can see that challenges persist in least developed countries and small island developing states, which are often more than 10 years behind other developing countries. Just to give you a few examples of how ITU is supporting countries to help ensure safer environments: ITU has helped establish 24 computer incident response teams, and over the past two years ITU has worked with seven different countries to establish national cyber security strategies through training workshops and in-country discussions. Since 2022, ITU has worked with over 50 partners in 30 countries to train over 170,000 children, 2,500 parents and educators, and over a thousand government and other stakeholders on child online protection. And it's not just about keeping children safe online; it's also about the experience of women, and supporting women online with initiatives such as Women in Cyber and Her CyberTracks, making sure that women are also trained and able to connect safely. Lastly, just to mention that in May, ITU organised a global cyber drill in Dubai with over 136 countries participating.
As Alessandra mentioned, we have been working with the FCDO, in particular on the Digital Access Programme, to promote effective regulation, greater investment and innovative models for connectivity in underserved communities in five countries: Brazil, Indonesia, Kenya, Nigeria and South Africa. The work has included policy guidance and recommendations for regulators, research into last-mile and alternative access solutions, and digital inclusion research and training. Examples of the work that we have been doing include collaborative regulation studies in Kenya, Nigeria and South Africa, development of a universal service financing efficiency toolkit and training, and digital skills assessments conducted in Kenya and Nigeria. We have been supportive of making sure that when connectivity reaches those underserved communities, they still have a safe online experience. Finally, I just wanted to mention that with the upcoming World Telecommunication Development Conference this November in Baku, ITU-D will continue to deepen our commitment to leaving no one behind and ensuring meaningful connectivity, making sure that populations have the relevant skills, and that countries have the tools and partnerships needed for their populations to thrive securely in the digital age. Thank you.
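The coverage-versus-usage distinction Samantha draws can be made concrete with a little arithmetic: of the 2.6 billion people offline, only a minority lack network coverage at all; the rest are covered but not using the internet. A minimal sketch, assuming a world population of roughly 8 billion (that figure is an assumption for illustration; the 97% coverage and 2.6 billion offline figures are the ones quoted in her remarks):

```python
# Rough decomposition of the offline population into a coverage gap
# (no network at all) and a usage gap (covered but not online).
WORLD_POP = 8.0e9          # assumed world population, for illustration only
COVERED_SHARE = 0.97       # share living within reach of a mobile network
OFFLINE = 2.6e9            # people not using the internet

coverage_gap = WORLD_POP * (1 - COVERED_SHARE)  # no network available
usage_gap = OFFLINE - coverage_gap              # covered, yet still offline

print(f"Coverage gap: {coverage_gap / 1e9:.2f} billion people")
print(f"Usage gap:    {usage_gap / 1e9:.2f} billion people")
```

Under these assumptions the usage gap (people who could connect but do not) dwarfs the coverage gap, which is why affordability, skills, local content and trust matter as much as infrastructure.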


Neil Wilson: Thank you very much, Sam, much appreciated. Really interesting to hear about all the work that the ITU is doing, not just on improving connectivity, but on meaningful connectivity, of course. And I know the topic of innovation as well as connectivity is of keen interest to our next speaker, Leonard Mabele. So, Leonard, I will pass now to you for your opening remarks.


Leonard Mabele: Yeah, once again, thank you very much, Neil, and great to be here again. I'm just going to share a small slide I have here. Neil, I can see you on my screen; just let me know if it's all clear in the room. (Yep, we can see that in the room. Thank you.) Perfect. So, building on what Alessandra was mentioning earlier about digital access initiatives, I'm going to present a glimpse of what's going on in Kenya: what sort of activities are happening, what some of the government initiatives on digital access across the country and collaborations within the region look like, as well as the projects that we are working on, or have already worked on, under the Digital Access Programme by FCDO. From 2018 to 2021, the government, led by the Communications Authority of Kenya, developed the National Broadband Strategy, which saw growth and a good deal of multi-stakeholder participation in delivering access and connectivity to different stakeholders, in this case to schools, rural locations, et cetera. But while significant results were achieved, some challenges were still realised, chief among them that delivering last-mile access requires a more holistic approach, which includes digital skills, devices and a focus on power. So with that National Broadband Strategy having lived its time until 2021, the government enacted a new plan through the Ministry of Digital Economy: the Kenya National Digital Master Plan, the one I'm presenting right now. A key focus is affordable access, and, maybe borrowing some words from Samantha, making sure that affordable access is also meaningful in the places that are still very much underserved, most of which are in the rural areas.
If you look at the Kenyan map, which was on my first slide, we have counties as large as the Netherlands, and some of them larger, the size of two European countries combined. With that kind of geography, different models and different approaches to reaching the last mile are required, and that forms a very key strategic pillar for the Ministry of ICT. So digital infrastructure is really key, including innovative ways to enable meaningful access at the last mile. Then, building out from the previous National Broadband Strategy, there is also the focus on digital skills: ensuring that the underserved population sees value in what connectivity means. In this case the underserved might include rural youth, and also a significant fraction of women in rural communities, who may not yet see the value, or have the understanding of the opportunity, that ICT presents to them. That's a very key pillar where the Ministry of Digital Economy has put in a lot of effort to ensure some key objectives are achieved. And of course, infrastructure is a window, an opportunity, to unlock more entrepreneurship activities and enable more developments in Industry 4.0: opportunities such as the Internet of Things, and we are all familiar with the conversations on AI now. So digital innovation is also one of the key pillars that the Ministry of Digital Economy has fleshed out, to see a lot of work happen not only in the urban and suburban areas but also in the rural communities. If you're in Kenya, you'll find a lot of digital hubs within the cities and very few in the rural areas.
Hence this pillar tries to bridge that digital divide, particularly on the aspect of digital innovation. And of course, while some government services have come online, and Kenya has been pretty active on that within the region, there are still some that are not, and they also serve a significant fraction of the people; again, underrepresented groups and rural communities, including women, do not have access to digital government services. So the Ministry of Digital Economy has created this as a separate pillar: having new services become digital and reach as many people as possible, and at the same time making the ones that are already available reach the underrepresented groups. Now, building on this, I'm moving to my next slide, which speaks to the work that is contributing to these pillars, the strategies the government has already put out there. One of the developments we have ongoing, which started way back in 2020 when the Communications Authority developed the framework for TV white spaces, has been the work on spectrum sharing. Beyond the legacy models of delivering access to rural areas, particularly through cellular connectivity or fibre infrastructure, there has been a need to look at other approaches to deliver this last-mile access and also enable innovation. So, with TV white spaces as a foundation, the chunks of work we've developed since then have come through the immense collaboration that has been going on, and is still going on, with the Dynamic Spectrum Alliance and FCDO, alongside the Communications Authority and other stakeholders, to look at what other opportunities for spectrum sharing can be considered to enhance capacity for internet access.
And, of course, at the same time, enhance access to the internet. One part of that has been the work on Wi-Fi 6E, in the 6 GHz band, in 2022-2023, led by Strathmore University. With the DSA, we worked on the coexistence studies for Wi-Fi and, in this case, helped develop what has become a guideline on the lower part of the 6 GHz band, which the Communications Authority published to look at ways of enhancing Wi-Fi capacity. Some of the conversations happening now are about how the upper part can also be adopted to enable more access to, or capacity for, Wi-Fi: during the coexistence study we did not focus on the lower part alone, we looked at the whole band, and the Communications Authority at this point is evaluating how use can be extended. Beyond that, what we worked on between 2023 and last year was evaluating the opportunity of having non-public networks deployed in places that are underserved, to support last-mile internet access, particularly through private LTE or private 5G networks, and to have community networks deployed in underserved places. That is part of the new initiatives, combining enabling affordable access and supporting last-mile internet access. And at the tail end of this development is to have new policies come out that can support the commercial rollout of these sorts of networks, and more sustainable models for them.
On various other initiatives: at the moment there is work going on, led by the DSA with Strathmore University, on a dynamic spectrum access certification programme that can help internet service providers understand the opportunity of spectrum sharing, and the approaches through which they can collaborate with other stakeholders to deliver these models of infrastructure to support last-mile networks. Beyond that, there is also the aspect of building understanding of different topics, particularly cyber security: as we push for access for the underserved, we also understand the vulnerability that comes with it, just as Samantha was saying earlier, and we are keen that cyber security skills are developed, and that the aspects of data protection are understood by the state agencies as well as the private sector, alongside a plethora of other digital skills programmes presently running in the country. We also now have a fibre programme that has been developed to help community networks understand that they could also deploy their networks through fibre infrastructure.
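The TV-white-space and dynamic-spectrum-sharing model Leonard describes rests on a simple idea: a device asks a geolocation database which channels are unused by licensed broadcasters at its location, and is granted only those. The sketch below is a toy illustration of that logic; all channel assignments, coordinates and protection radii are invented for the example, and a real white-space database applies far more detailed propagation and protection rules.

```python
# Toy model of a geolocation-database lookup for TV white space:
# a channel is "blocked" if the query point falls inside the protected
# contour of a (hypothetical) licensed broadcast assignment.
import math
from dataclasses import dataclass

@dataclass(frozen=True)
class Assignment:
    channel: int        # TV channel number used by a licensed broadcaster
    lat: float
    lon: float
    radius_km: float    # protected contour around the transmitter

# Hypothetical licensed assignments, invented for illustration.
ASSIGNMENTS = [
    Assignment(channel=21, lat=-1.29, lon=36.82, radius_km=50.0),
    Assignment(channel=25, lat=-1.29, lon=36.82, radius_km=80.0),
]

def rough_distance_km(lat1, lon1, lat2, lon2):
    # Equirectangular approximation; adequate for a toy example.
    dlat = math.radians(lat2 - lat1)
    dlon = math.radians(lon2 - lon1) * math.cos(math.radians((lat1 + lat2) / 2))
    return 6371.0 * math.hypot(dlat, dlon)

def available_channels(lat, lon, all_channels=range(21, 28)):
    """Channels a white-space device may use at (lat, lon)."""
    blocked = {
        a.channel
        for a in ASSIGNMENTS
        if rough_distance_km(lat, lon, a.lat, a.lon) <= a.radius_km
    }
    return [ch for ch in all_channels if ch not in blocked]

# Right at the assumed transmitter site, channels 21 and 25 are blocked;
# several hundred kilometres away, the whole range is free.
print(available_channels(-1.29, 36.82))
print(available_channels(-4.0, 39.6))
```

The same query-before-transmit pattern underlies the certification work mentioned above: operators need to understand that access is granted per location and per channel, not as an exclusive nationwide licence.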


Neil Wilson: Professor Luzango, please take it away with your remarks. Thank you.


Luzango Mfupe: Thank you, Neil. So, my colleagues have already touched on some of the topics I would like to cover, but I want to emphasise the need for broadband connectivity for the development of any nation. Here in South Africa, just to give you an overview, in the last 10 years the government has achieved quite good success in terms of digital transformation, particularly in connectivity. For example, we are talking about almost 78 percent of the population having some form of internet connectivity. However, most of the population is connected via mobile networks; the picture is similar to my colleague's in Kenya, and only around 14.5 percent of the population is connected via fixed internet at home. We do have a National Development Plan 2030, which calls for universal access for everyone, as we are aware that broadband connectivity can contribute quite a good percentage of GDP in developing countries. And there are initiatives, for example around the Fourth Industrial Revolution, and others are already talking about the Fifth. But how can a nation achieve that if there is no affordable or ubiquitous access to broadband? If you look at the United Nations Sustainable Development Goals, over half of them actually require broadband access to be achieved by any country. So the fear is that if we proceed this way, where only the few in urban areas are connected and the majority of people in rural areas are not, we might not achieve the UN SDGs. To give you more detail, I would like to give you an analogy on the affordability of data here in South Africa vis-à-vis the daily household food basket cost. The cost of one gigabyte of data has gone down from around 89 rand to around 33 rand, which is around 1.8 US dollars. However, if you compare that 1.8 US dollars to the average daily household food basket cost, it is around 10% of it.
So a rural household owner will have to debate whether they should buy data or put bread on the table and afford other things. This has always been the challenge: how can one reduce the cost of connectivity? And this is the focus of the research that we are doing at the CSIR. To give you an idea of what we've been working on in the past 10 years, we've been trying to develop technologies and solutions that will allow rural communities to connect to broadband affordably, with some of the initiatives in the area of reducing the cost of accessing spectrum, because we are aware that spectrum contributes immensely to the total cost of ownership for any operator, particularly wireless operators like mobile networks. So one of the solutions that we're looking at is the innovative use of spectrum by sharing it dynamically. In the past 10 years, for example, we worked with the regulator here, ICASA, to come up with the regulations that allow operators, big and small, to access spectrum in the broadcasting band, the so-called TV white space, and by March 2018 that was achieved: the regulator published the regulations on the use of TV white space. We are currently working with the regulator on the so-called innovation spectrum; this is the spectrum around 3.8 to 4.2 gigahertz, as well as the lower 6 gigahertz band, the so-called Wi-Fi 6E. In this regard, we've been working very closely with the FCDO, firstly in enabling ICT-focused SMMEs owned by women, youth and persons with disabilities to take advantage of the spectrum which is now available in the TV white space band, as well as the spectrum for which we are busy working to get the regulations in motion. Since around 2020, the FCDO and the CSIR have collaborated in supporting around 13 of these small and medium enterprises in deploying affordable connectivity in rural areas. Around five provinces have been reached through this programme, and over 70,000 users in rural areas are connected through this initiative daily. The CSIR and other partners have been providing capacity building to these beneficiary SMMEs on technical and business models, so that they can be sustainable beyond the support that we are providing to them. Maybe I should stop here. Thank you, Neil.
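Professor Mfupe's affordability comparison is easy to reproduce. In the sketch below, the two data prices are the ones he quotes; the exchange rate and the daily food-basket figure are assumptions chosen to match his "around 1.8 US dollars" and "around 10%" framing:

```python
# Affordability arithmetic: price of 1 GB of data versus a daily
# household food budget, as in the remarks above.
PRICE_PER_GB_ZAR = 33.0        # current price of 1 GB (from the talk)
OLD_PRICE_PER_GB_ZAR = 89.0    # price a few years ago (from the talk)
ZAR_PER_USD = 18.0             # assumed exchange rate
DAILY_FOOD_BASKET_ZAR = 330.0  # assumed daily household food basket

price_usd = PRICE_PER_GB_ZAR / ZAR_PER_USD
drop_pct = 100 * (1 - PRICE_PER_GB_ZAR / OLD_PRICE_PER_GB_ZAR)
share_of_basket = 100 * PRICE_PER_GB_ZAR / DAILY_FOOD_BASKET_ZAR

print(f"1 GB costs about ${price_usd:.2f} ({drop_pct:.0f}% cheaper than before)")
print(f"1 GB is about {share_of_basket:.0f}% of the daily food basket")
```

Even after a roughly 60% price drop, a single gigabyte still claims around a tenth of a day's food budget under these assumptions, which is the trade-off between data and bread that he describes.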


Neil Wilson: Thank you very much, Professor. Really interesting, and I think across this conversation, if you'll excuse the pun, we're really running the full spectrum, all the way from local regulatory frameworks and environments up to the global normative initiatives. That leads us quite nicely to our next panellist, Maria, who is going to be representing the civil society voice on this, and will hopefully be able to provide a bit more flavour on these topics, particularly from that normative angle and in terms of a rights-respecting approach to these issues. So Maria, please.


Maria Paz Canales: Thank you so much, Neil, and thank you for all the presentations. The idea of my intervention is to complement what you have already heard from a slightly different angle. We have heard a lot in the previous presentations about how to ensure connectivity in a sustainable, meaningful way. As a complement, and picking up the pillars presented initially by Alessandra on inclusive, responsible and sustainable digital transformation strategies, I want to look at what it means to unpack effective and inclusive participation of different stakeholders at the local level, involved from the very beginning in the design and deployment of these strategies. This has been part of the core work of Global Partners Digital across the years: we have been working with partners in different regions around the world, trying to support them to unpack, in their own work at the local level and in collaboration with local authorities, what it means, and what the benefits are, of having participatory processes of design, implementation and deployment of digital strategies. Applied particularly to the context of connectivity, and the usage gaps Samantha was referring to: many times, when we start the deployment of digital transformation strategies or the implementation of these projects, we focus very heavily on the first part, ensuring that the population can be effectively connected, with broadband access and access to devices, and increasingly we also think of digital skills, as we have heard in many of the programmes, as an essential part of the implementation. But then there is an additional layer, when we increase the sophistication and provide tools for local actors to engage meaningfully with these policies. So one element that I would like to see more of, probably in future iterations of these strategies, is where the more participatory angle sits. It can be part of the inclusive approach, but it is also part of the responsible approach, because the only way to be effective in responding to the needs of local communities, local realities and contexts is to have, by design, the digital transformation policy produced and discussed at the local level with the relevant actors, including traditionally marginalised communities. We need to go beyond the top-down approach of simply providing certain technologies and certain skills to certain populations; we also need to learn what their needs are, the ways in which they start to engage, and how technology starts to transform social life at the community level. All those elements should be taken into account when we are talking about digital transformation that is really conscious and rights-respecting, in the best service of human development. The final angle that I would like to include, because we may not have much time for discussion afterwards, is the relevance of the connection between local developments and global guidance. I have seen in the work of Global Partners Digital that many groups at the local level struggle to show their local governments and policymakers the benefit of approaching digital transformation policy with this human rights angle, and struggle, for example, with the engagement of companies that look at many developing countries pursuing digital transformation strategies as markets, but are not necessarily willing to offer the same level of protection there that they offer elsewhere. So there is value in the global guidance, but also in feeding the local perspective into how to design meaningful, responsible and sustainable digital transformation at the local
level. We try to be good partners, as our name says, Global Partners Digital, in trying to bring that to the different groups working on the ground. So if you want to have more access to information, I encourage you to visit our website and there are many materials and I’m happy to be in contact with anyone that can benefit from some of the projects that we have implemented in that sense. I’ll stop there for now. Thank you, Neil.


Neil Wilson: Thank you so much, Maria. I really appreciate it. So we have just a little bit of time left now for questions. If anyone here in the audience would like to ask a question, please do go up to one of the microphones on either side of the stage. All I'd ask is that when you do, please state your name and any organisational affiliation, and say if you'd like to address your question to any particular panellist. We also, of course, have online participation. This is a hybrid session, and I believe there's a Mentimeter attached to that, so I'm just going to double-check the chat and see if there are any questions. If there are none in the room, or indeed online, then I am more than happy to ask a question to the panel here. I think we've covered quite a broad spectrum of issues, running the full gamut of digital development. A clear recurring theme from the conversation we've had is the need to ensure not just connectivity but meaningful connectivity, and not just technology for the sake of technology but technology that is actually rights-respecting. Part of that is that we are hearing a great deal about the risks and issues that are created with the adoption of new technologies. So a bit of an open question to the panel, and I welcome perspectives from anyone: I'm really interested to hear a bit more about this balance between innovation and increasing connectivity on the one hand, and the need to prevent new harms on the other. Alessandra, you mentioned technology-facilitated gender-based violence; cybercrime and disinformation have all appeared as well. I'd welcome any thoughts from the panel on that as we rapidly approach the end.


Alessandra Lustrati: Shall I jump in and break the ice? Before I jump back into the risks, allow me to stress one point, because there's been a theme through the conversations and presentations this morning, which is the community networks approach to increasing not just affordable and inclusive but also meaningful connectivity. I just wanted to stress that we work with the Association for Progressive Communications to deepen that approach. Because it's community-based, it really starts, as Maria indicated very importantly, by avoiding the risk, and this is also a risk of exclusion, of being very top-down in the way we propose technological solutions, and instead hearing first of all from the community what their needs are, but also their potential and their ambitions. APC has done a fantastic job as a global partner across the five countries of the Digital Access Programme, but also beyond. I encourage anybody who is interested in community networks approaches to approach APC and get more information; we did a great session with them yesterday as well, and there's a new publication that we've also launched. So after this plug on community networks, because it's one of my passions, going back to the risks part, maybe I'll just focus on TFGBV, because I want to give space to the others. We've done a lot of work on promoting online safety, which always has to go both at the level of regulatory frameworks and at the level of the capability of the users themselves. We always build cyber hygiene awareness and online safety tools and skills into digital skills development trainings. But when it comes specifically to technology-facilitated gender-based violence, the theme becomes more complex. And so we've done additional research to try and really understand the drivers of where that comes from, and what we can do to prevent that dynamic rather than only supporting survivors to deal with the consequences. I'll stop there. But just to say that if anybody wants to know more about that approach to preventing TFGBV, the UK also has a global partnership with different countries around the world on this. Please come and talk to me or other FCDO colleagues and we can tell you more. I'll just stop on that. Thank you. Thanks a lot.


Neil Wilson: Thank you, Alessandra. We have just a few minutes left in case anyone has any questions. So please do feel free to approach the microphones if so.


Maria Paz Canales: I can jump in on that as well, with just a complement to my previous remark. I think one essential element of ensuring the human rights approach I am advocating for, and of addressing in an effective manner the potential risks or harms that come from some of these very relevant developments, is acknowledging that they all have negative sides too: as well as bringing a lot of potential for improving human life, they bring new risks and new challenges. It is precisely about enhancing, at the same time as we enhance access to technology and people's ability to meaningfully interact with it, the other structures, the institutional and normative structures, that allow us to track in an effective manner the impact that the deployment of technology is having on the ground. Usually, when we focus on providing meaningful connectivity and more meaningful access to technologies, we focus much more on the shiny object than on the potential negative impact that the shiny object can have. So what we advocate with the human rights approach is that we do both. These are not contradictory elements or a zero-sum game. We can do both at the same time, and doing so has great benefits in terms of reinforcement, legitimacy, and ensuring there is no mission creep from the original intention of the digital transformation policies we are implementing. I truly believe there are always good intentions behind the deployment of technology, but there are a lot of unknowns, and they do not come only from the nature of the technology itself. They come from the specific interaction between specific technologies and local contexts, realities, and cultural and social elements that differ from one place to another. That is why constantly monitoring how this unfolds, and having mechanisms in place to course-correct, with oversight, review and iteration in the policy design cycle, is fundamental for ensuring a rights-respecting approach in the digital field.


Neil Wilson: Thank you, Maria. I'm afraid I can't see our online participants on the screen in front of me, so please do just chime in if you have anything to add. Yeah, they can just jump in, can't they? Yeah, thanks Neil.


Leonard Mabele: Yeah, I just wanted to share a little perspective. I see we only have a minute, so I'll keep it short. Just yesterday I came back from Busia County, a county in the western part of Kenya neighbouring Uganda. Moving around that county, I found it to be the most underserved in terms of cellular access in the country, of everywhere I've been. Going by the words of the Alliance for Affordable Internet and ITU's definition of meaningful internet access, the best connection you will get in most places, as you go deeper, is 3G, and in many other places I had EDGE, I mean 2G. What I found baffling was just how many schools are next to each other in such regions. And when I started thinking about it, I realised that the population reported by the National Bureau of Statistics for that county is practically not correct. It makes me doubt whether we really have the right figure for the global population at 8 billion, or whether there are more people we are not counting as we think about this. So looking at the intersection of connectivity and innovation, what struck my mind was that we have an avenue for community networks to really function to deliver meaningful and affordable access in rural areas. But beyond that, from the conversations we were having with farmers, it was very interesting to note that they also need digital solutions. So I'm looking at the intersection of connectivity brought by community networks, and community networks also supporting innovation to serve sectors in some of the underserved areas. Sorry, I went into extra time.


Neil Wilson: Not at all, thank you so much; we really appreciate it. And yes, indeed, unfortunately we are not only out of time but slightly over time, so we're going to have to wrap up there. All that remains is to say thank you so much to our panellists, both here in the room and those online, and to all our participants in the room and online. There will be a summary report of the session produced and published, I believe, on the IGF website, so please do keep an eye out for that. I'm sure I speak on behalf of all our panellists in saying we'd be very happy to continue the conversation afterwards. So thank you again, and we look forward to continuing the dialogue. Thank you, everyone.


S

Samantha O’Riordan

Speech speed

119 words per minute

Speech length

810 words

Speech time

407 seconds

2.6 billion people remain offline globally, with the majority in Africa and Asia, creating unequal access to digital opportunities

Explanation

Despite technological progress, a significant portion of the global population still lacks internet access, with the distribution being uneven across regions. This creates inequality in digital opportunities and access to information and services.


Evidence

2.6 billion people are still offline, with the majority found in Africa and Asia


Major discussion point

Digital divide and global connectivity gaps


Topics

Development | Infrastructure


Coverage exists for 97% of the world through mobile networks, but significant usage gap remains due to affordability and lack of digital skills

Explanation

While mobile network infrastructure covers most of the world, many people who could technically access the internet choose not to due to cost barriers and insufficient digital literacy. This highlights the difference between availability and actual usage of digital services.


Evidence

97% of the world is covered by mobile networks, but usage gap persists due to affordability and lack of digital skills, awareness, knowledge, local content and trust


Major discussion point

Meaningful connectivity versus basic coverage


Topics

Development | Infrastructure | Sociocultural


Meaningful connectivity requires safe, satisfying, enriching and productive online experience at affordable cost, not just basic access

Explanation

True digital inclusion goes beyond simply providing internet access to ensuring users can have a quality online experience that adds value to their lives. This comprehensive approach considers safety, relevance, and economic accessibility as essential components.


Evidence

UN targets on meaningful connectivity for 2030 define it as access to safe, satisfying, enriching and productive online experience at affordable cost


Major discussion point

Quality and value of digital experiences


Topics

Development | Human rights


Agreed with

– Leonard Mabele
– Alessandra Lustrati

Agreed on

Meaningful connectivity requires more than basic access


Cybersecurity must be foundational to digital development and integrated into every layer of technological advancement

Explanation

As digital technologies become more central to society and economy, security considerations cannot be an afterthought but must be built into the foundation of all digital development initiatives. This ensures trust and resilience in digital systems from the ground up.


Evidence

ITU has been concerned about trustworthy communication since its beginning 160 years ago, evolving from cable interference to modern cyber threats with AI and quantum computing


Major discussion point

Security as fundamental requirement for digital trust


Topics

Cybersecurity | Development


ITU has established 24 computer incident response teams and developed national cybersecurity strategies in multiple countries

Explanation

The International Telecommunications Union has been actively working to build cybersecurity capacity globally through practical initiatives that help countries respond to cyber threats and develop comprehensive security frameworks. This represents concrete action to address digital security challenges.


Evidence

ITU helped establish 24 computer incident response teams, worked with 7 countries on national cyber security strategies, trained over 170,000 children and 2,500 parents/educators on child online protection since 2022, organized global cyber drill in Dubai with 136 countries participating


Major discussion point

International cooperation in cybersecurity capacity building


Topics

Cybersecurity | Development


Agreed with

– Alessandra Lustrati

Agreed on

Digital skills must include safety and security awareness


Least developed countries and small island developing states lag 10+ years behind in cybersecurity capacity

Explanation

Despite global efforts to improve cybersecurity, there remains a significant gap between developed and least developed nations in their ability to protect against and respond to cyber threats. This disparity creates vulnerabilities that can affect global digital security.


Evidence

Global Cyber Security Index shows challenges persist in least developed countries and small island developing states, which are often more than 10 years behind other developing countries


Major discussion point

Cybersecurity capacity gaps between nations


Topics

Cybersecurity | Development


L

Luzango Mfupe

Speech speed

98 words per minute

Speech length

781 words

Speech time

474 seconds

Cost of data in South Africa represents 10% of average household food budget, forcing rural families to choose between connectivity and basic needs

Explanation

Despite decreasing data costs, internet access remains prohibitively expensive for many rural households when compared to their essential living expenses. This economic barrier creates a situation where families must prioritize basic survival needs over digital connectivity.


Evidence

Cost of 1GB data is around 33 Rand (1.8 US dollars), which represents about 10% of average household food basket cost, forcing rural households to choose between buying data or putting bread on the table


Major discussion point

Economic barriers to digital inclusion


Topics

Development | Economic


South Africa has achieved 78% internet connectivity but only 14.5% have fixed internet at home, highlighting infrastructure gaps

Explanation

While South Africa has made progress in overall internet connectivity, the heavy reliance on mobile networks versus fixed broadband reveals limitations in digital infrastructure quality and reliability. This disparity affects the type and quality of digital services people can access.


Evidence

78% of population has some form of internet connectivity, mostly via mobile networks, while only 14.5% have fixed internet at home


Major discussion point

Quality and type of internet infrastructure


Topics

Infrastructure | Development


Innovation spectrum regulations around 3.8-4.2 GHz and lower 6 GHz bands can enable affordable connectivity solutions

Explanation

By opening up additional spectrum bands for innovative use, regulators can create opportunities for new, more cost-effective connectivity solutions. This regulatory approach can help reduce the overall cost of providing internet access, particularly in underserved areas.


Evidence

Working with regulator ICASA on innovation spectrum in 3.8-4.2 GHz and lower 6 GHz bands (Wi-Fi 6), building on previous success with TV white space regulations published in March 2018


Major discussion point

Regulatory innovation for spectrum access


Topics

Infrastructure | Legal and regulatory


Agreed with

– Leonard Mabele

Agreed on

Spectrum innovation and sharing can reduce connectivity costs


Spectrum costs contribute significantly to total ownership costs for operators, making dynamic sharing essential for affordability

Explanation

The high cost of spectrum licenses represents a major expense for telecommunications operators, which ultimately gets passed on to consumers. Dynamic spectrum sharing offers a way to reduce these costs by allowing more efficient use of available spectrum resources.


Evidence

Spectrum contributes immensely to total cost of ownership for wireless operators like mobile networks, leading to focus on dynamic spectrum sharing solutions


Major discussion point

Economic impact of spectrum costs on connectivity


Topics

Infrastructure | Economic


13 small-medium enterprises led by women, youth, and persons with disabilities have been supported to deploy affordable rural connectivity

Explanation

Targeted support for underrepresented groups in the telecommunications sector can create sustainable business models for rural connectivity while promoting inclusive economic development. This approach addresses both connectivity gaps and economic empowerment simultaneously.


Evidence

Since 2020, FCDO and CSIR collaborated to support 13 SMMEs owned by women, youth and persons with disabilities across five provinces, reaching over 70,000 daily users in rural areas


Major discussion point

Inclusive business models for rural connectivity


Topics

Development | Economic | Human rights


Agreed with

– Alessandra Lustrati
– Maria Paz Canales

Agreed on

Community-based approaches are essential for sustainable connectivity


Over 70,000 rural users are connected daily through community-based initiatives with technical and business model capacity building

Explanation

Community-based connectivity initiatives can achieve significant scale when combined with proper technical and business support. The sustainability of these initiatives depends on building local capacity rather than just providing initial funding or equipment.


Evidence

Over 70,000 users in rural areas connected daily through supported SMMEs, with over 200 partners providing capacity building in technical and business models for sustainability


Major discussion point

Scale and sustainability of community connectivity


Topics

Development | Economic


L

Leonard Mabele

Speech speed

174 words per minute

Speech length

1859 words

Speech time

640 seconds

Large geographic areas in countries like Kenya require different models and approaches to reach last-mile connectivity

Explanation

The vast scale of some administrative regions, comparable to entire European countries, creates unique challenges for connectivity deployment. Traditional approaches may not be economically viable or technically feasible across such diverse and expansive territories.


Evidence

Kenyan counties are as large as Netherlands, some larger than two European countries combined, requiring different models and approaches for last-mile access


Major discussion point

Geographic challenges in connectivity deployment


Topics

Infrastructure | Development


Kenya’s National Digital Master Plan emphasizes affordable meaningful access, digital skills, innovation, and digital government services

Explanation

Kenya’s comprehensive digital strategy recognizes that connectivity alone is insufficient and must be accompanied by skills development, innovation opportunities, and accessible government services. This holistic approach aims to ensure digital transformation benefits reach all citizens, particularly in underserved areas.


Evidence

Plan focuses on four pillars: digital infrastructure for meaningful access to last mile, digital skills for underserved populations including rural youth and women, digital innovation extending beyond urban areas to rural communities, and digital government services reaching underrepresented groups


Major discussion point

Comprehensive national digital strategy


Topics

Development | Infrastructure | Sociocultural


Agreed with

– Samantha O’Riordan
– Alessandra Lustrati

Agreed on

Meaningful connectivity requires more than basic access


TV White Spaces and spectrum sharing provide opportunities to reduce connectivity costs and enable last-mile access

Explanation

Innovative use of unused television broadcast spectrum can provide cost-effective connectivity solutions, particularly for rural and underserved areas. This approach leverages existing spectrum resources more efficiently while reducing infrastructure costs.


Evidence

Communications Authority developed TV white spaces framework in 2020, leading to ongoing work with Dynamic Spectrum Alliance and FCDO on spectrum sharing opportunities including Wi-Fi 6E in 6 GHz band


Major discussion point

Spectrum innovation for affordable connectivity


Topics

Infrastructure | Legal and regulatory


Agreed with

– Luzango Mfupe

Agreed on

Spectrum innovation and sharing can reduce connectivity costs


Dynamic spectrum access and Wi-Fi 6E in 6 GHz band can enhance capacity and access for underserved communities

Explanation

Advanced spectrum management techniques and newer wireless technologies can provide better connectivity options for communities that have been historically underserved by traditional telecommunications infrastructure. These technologies offer improved capacity and performance at potentially lower costs.


Evidence

Work on Wi-Fi 6E coexistence studies in 6 GHz band (2022-2023) led to Communications Authority guidelines for lower band, with evaluation ongoing for upper band extension; includes development of non-public networks for private LTE/5G community networks


Major discussion point

Advanced wireless technologies for underserved areas


Topics

Infrastructure | Development


Many underserved areas have inadequate connectivity, with some regions only having 2G/3G access despite high population density

Explanation

Even in countries with relatively good national connectivity statistics, significant pockets of poor connectivity persist, particularly in rural areas. The disconnect between population density and connectivity quality suggests that current infrastructure deployment strategies may not adequately serve all communities.


Evidence

Recent visit to Busia County in western Kenya showed most areas only have 3G access, with many places having only 2G/EDGE, despite a high concentration of schools and potentially underreported population density


Major discussion point

Persistent connectivity gaps in rural areas


Topics

Infrastructure | Development


Beyond connectivity, rural communities need digital solutions for their specific sectors like agriculture

Explanation

Meaningful digital transformation requires not just internet access but also relevant applications and services that address local economic activities and challenges. Rural communities, particularly those engaged in agriculture, need specialized digital tools to realize the full benefits of connectivity.


Evidence

Conversations with farmers revealed need for digital solutions specific to agricultural sector, highlighting intersection of community networks and innovation to support underserved area sectors


Major discussion point

Sector-specific digital solutions for rural communities


Topics

Development | Economic | Sociocultural


A

Alessandra Lustrati

Speech speed

176 words per minute

Speech length

1577 words

Speech time

535 seconds

Digital development should support inclusive, responsible, and sustainable digital transformation across economy, government, and society

Explanation

Effective digital development requires a comprehensive approach that goes beyond just providing technology access to ensuring that digital transformation benefits all segments of society while managing risks and environmental impacts. This holistic view recognizes digital transformation as a fundamental change affecting all aspects of human organization.


Evidence

FCDO definition focuses on supporting partner countries in achieving inclusive, responsible and sustainable digital transformation across economy, government and society, using a three-pillar policy framework


Major discussion point

Comprehensive approach to digital transformation


Topics

Development | Human rights


UK’s three-pillar approach focuses on digital inclusion, digital responsibility (managing risks), and digital sustainability

Explanation

The UK’s digital development strategy recognizes that successful digital transformation must simultaneously address access barriers, manage emerging risks, and consider environmental impacts. This balanced approach ensures that digital progress doesn’t create new problems while solving existing ones.


Evidence

Policy framework includes: digital inclusion (connectivity, skills, content for underserved communities), digital responsibility (cyber security, online safety, data protection, TFGBV), and digital sustainability (environmental impact and climate solutions)


Major discussion point

Balanced approach to digital development challenges


Topics

Development | Human rights | Cybersecurity


Community networks approach starts from understanding local needs, potential, and ambitions rather than top-down technological solutions

Explanation

Effective connectivity solutions must be grounded in community participation and local context rather than imposed from external actors. This bottom-up approach ensures that technological interventions are relevant, sustainable, and truly serve community needs.


Evidence

Partnership with Association for Progressive Communication (APC) across five countries emphasizes community-based approaches that start from community needs, potential and ambitions rather than top-down technology solutions


Major discussion point

Community-centered approach to connectivity


Topics

Development | Sociocultural


Agreed with

– Luzango Mfupe
– Maria Paz Canales

Agreed on

Community-based approaches are essential for sustainable connectivity


Technology-facilitated gender-based violence requires both regulatory frameworks and user capability building, with focus on prevention rather than just response

Explanation

Addressing online gender-based violence requires a comprehensive strategy that includes legal and policy measures as well as empowering users with knowledge and skills. Moving beyond reactive approaches to focus on prevention addresses root causes rather than just consequences.


Evidence

Work includes promoting online safety through regulatory frameworks and user capabilities, with additional research on TFGBV drivers and prevention approaches, supported by global partnerships with different countries


Major discussion point

Comprehensive approach to online gender-based violence


Topics

Human rights | Cybersecurity


Digital inclusion must address not only connectivity but also digital skills, relevant content, and accessibility for underserved communities

Explanation

True digital inclusion requires addressing multiple barriers simultaneously, including not just physical access to internet but also the ability to use it effectively and access to content and services that are relevant to users’ lives and contexts. This comprehensive approach ensures that connectivity translates into meaningful opportunities.


Evidence

Digital inclusion pillar focuses on inclusive connectivity at last mile, access to relevant digital content and services for marginalized communities, and development of digital skills at different levels to make connectivity meaningful and productive


Major discussion point

Multi-dimensional nature of digital inclusion


Topics

Development | Human rights | Sociocultural


Agreed with

– Samantha O’Riordan
– Leonard Mabele

Agreed on

Meaningful connectivity requires more than basic access


Digital skills development should always include cyber hygiene awareness and online safety tools

Explanation

As people gain access to digital technologies, they must also be equipped with the knowledge and skills to use them safely. Integrating security and safety education into digital literacy programs ensures that increased connectivity doesn’t lead to increased vulnerability.


Evidence

Digital skills development trainings always build in cyber hygiene awareness and online safety tools and skills


Major discussion point

Integration of safety into digital literacy


Topics

Development | Cybersecurity | Sociocultural


Agreed with

– Samantha O’Riordan

Agreed on

Digital skills must include safety and security awareness


Supporting local tech entrepreneurship and digital economies creates sustainable models for continued development

Explanation

Building local capacity and business ecosystems ensures that digital development initiatives can continue and expand beyond initial external support. This approach creates economic opportunities while addressing development challenges through locally-relevant innovations.


Evidence

Digital Access Programme pillar three supports tech entrepreneurship in local digital economies of five focal countries, facilitating digital innovations for local development challenges and creating opportunities for business partnerships and investment


Major discussion point

Local entrepreneurship for sustainable digital development


Topics

Development | Economic


Local organizations must be prioritized in delivery models to ensure sustainability beyond external support

Explanation

Sustainable digital development requires building the capacity of local institutions and organizations rather than relying on external actors for ongoing implementation. This approach ensures that initiatives can continue and adapt to changing local needs over time.


Evidence

Delivery model gives huge priority to working with local organisations while also having global partners, with flexible and agile approach that enables local stakeholders to take forward the work through capacity building and technical assistance


Major discussion point

Local ownership and sustainability


Topics

Development


Maria Paz Canales

Speech speed

146 words per minute

Speech length

1176 words

Speech time

480 seconds

Effective digital transformation requires participatory processes involving local stakeholders from design through implementation

Explanation

Digital transformation initiatives are more likely to succeed and serve community needs when local stakeholders are meaningfully involved throughout the entire process rather than just being recipients of predetermined solutions. This participatory approach ensures that interventions are contextually appropriate and locally supported.


Evidence

Global Partners Digital works with partners across regions supporting participatory processes in design and deployment of digital strategies, emphasizing benefits of having local actors involved from beginning in collaboration with local authorities


Major discussion point

Participatory design in digital transformation


Topics

Development | Sociocultural


Agreed with

– Alessandra Lustrati
– Luzango Mfupe

Agreed on

Community-based approaches are essential for sustainable connectivity


Traditional marginalized communities must be meaningfully engaged to ensure technology responds to local realities and contexts

Explanation

Digital transformation can either reduce or exacerbate existing inequalities depending on whether marginalized communities are included in shaping how technologies are deployed and used. Meaningful engagement goes beyond consultation to ensure these communities have genuine influence over digital development processes.


Evidence

Need to go beyond top-down approach and learn from local communities about their needs, how they engage with technology, and how technology transforms social life at community level, particularly for traditionally marginalized communities


Major discussion point

Inclusive participation in digital policy


Topics

Development | Human rights | Sociocultural


Human rights approach requires monitoring technology’s impact and having mechanisms for course correction and oversight

Explanation

Responsible digital development requires ongoing assessment of how technologies are actually affecting people’s lives and rights, with systems in place to address problems when they arise. This approach recognizes that good intentions are insufficient without accountability mechanisms and adaptive management.


Evidence

Need for institutional and normative structures to track technology impact, mechanisms for course correction, oversight and review cycles in policy implementation, acknowledging both positive potential and negative risks of technology deployment


Major discussion point

Accountability and adaptive management in digital development


Topics

Human rights | Development


Agreements

Agreement points

Meaningful connectivity requires more than basic access

Speakers

– Samantha O’Riordan
– Leonard Mabele
– Alessandra Lustrati

Arguments

Meaningful connectivity requires safe, satisfying, enriching and productive online experience at affordable cost, not just basic access


Kenya’s National Digital Master Plan emphasizes affordable meaningful access, digital skills, innovation, and digital government services


Digital inclusion must address not only connectivity but also digital skills, relevant content, and accessibility for underserved communities


Summary

All speakers agree that true digital inclusion goes beyond providing internet access to ensuring users can have quality, relevant, and productive online experiences that add value to their lives


Topics

Development | Human rights | Infrastructure


Community-based approaches are essential for sustainable connectivity

Speakers

– Alessandra Lustrati
– Luzango Mfupe
– Maria Paz Canales

Arguments

Community networks approach starts from understanding local needs, potential, and ambitions rather than top-down technological solutions


13 small-medium enterprises led by women, youth, and persons with disabilities have been supported to deploy affordable rural connectivity


Effective digital transformation requires participatory processes involving local stakeholders from design through implementation


Summary

Speakers consistently emphasize that sustainable digital development must be community-driven, participatory, and responsive to local contexts rather than imposed from external actors


Topics

Development | Sociocultural | Human rights


Spectrum innovation and sharing can reduce connectivity costs

Speakers

– Leonard Mabele
– Luzango Mfupe

Arguments

TV White Spaces and spectrum sharing provide opportunities to reduce connectivity costs and enable last-mile access


Innovation spectrum regulations around 3.8-4.2 GHz and lower 6 GHz bands can enable affordable connectivity solutions


Summary

Both speakers from Kenya and South Africa agree that innovative spectrum management and dynamic sharing are crucial for making connectivity more affordable and accessible


Topics

Infrastructure | Legal and regulatory | Economic


Digital skills must include safety and security awareness

Speakers

– Samantha O’Riordan
– Alessandra Lustrati

Arguments

ITU has established 24 computer incident response teams and developed national cybersecurity strategies in multiple countries


Digital skills development should always include cyber hygiene awareness and online safety tools


Summary

Both speakers emphasize that as people gain digital access, they must simultaneously be equipped with cybersecurity knowledge and online safety skills


Topics

Development | Cybersecurity | Sociocultural


Similar viewpoints

Both African representatives highlight the unique challenges of their regions, including vast geographic scales and economic barriers that require innovative, context-specific solutions for rural connectivity

Speakers

– Leonard Mabele
– Luzango Mfupe

Arguments

Large geographic areas in countries like Kenya require different models and approaches to reach last-mile connectivity


Cost of data in South Africa represents 10% of average household food budget, forcing rural families to choose between connectivity and basic needs


Topics

Development | Infrastructure | Economic


Both speakers emphasize the importance of local ownership and participation in digital development, whether through entrepreneurship or community engagement, to ensure sustainability and relevance

Speakers

– Alessandra Lustrati
– Maria Paz Canales

Arguments

Supporting local tech entrepreneurship and digital economies creates sustainable models for continued development


Traditional marginalized communities must be meaningfully engaged to ensure technology responds to local realities and contexts


Topics

Development | Human rights | Economic


Both speakers acknowledge significant gaps in digital infrastructure and capacity between developed and developing nations, highlighting the need for targeted support and different approaches

Speakers

– Samantha O’Riordan
– Luzango Mfupe

Arguments

Least developed countries and small island developing states lag 10+ years behind in cybersecurity capacity


South Africa has achieved 78% internet connectivity but only 14.5% have fixed internet at home, highlighting infrastructure gaps


Topics

Development | Infrastructure | Cybersecurity


Unexpected consensus

Integration of environmental sustainability into digital development

Speakers

– Alessandra Lustrati

Arguments

UK’s three-pillar approach focuses on digital inclusion, digital responsibility (managing risks), and digital sustainability


Explanation

While environmental impact of digital technologies is often overlooked in development discussions, there was recognition that digital transformation must consider environmental sustainability alongside social and economic benefits


Topics

Development | Infrastructure


Need for ongoing monitoring and adaptive management

Speakers

– Maria Paz Canales
– Alessandra Lustrati

Arguments

Human rights approach requires monitoring technology’s impact and having mechanisms for course correction and oversight


Local organizations must be prioritized in delivery models to ensure sustainability beyond external support


Explanation

There was unexpected consensus on the need for continuous assessment and adaptation of digital development initiatives, moving beyond implementation to ongoing management and course correction


Topics

Human rights | Development


Overall assessment

Summary

Strong consensus emerged around the need for meaningful rather than basic connectivity, community-centered approaches, spectrum innovation for affordability, and integration of safety into digital skills. Speakers consistently emphasized local ownership, participatory design, and addressing the specific challenges of underserved communities.


Consensus level

High level of consensus with complementary perspectives rather than conflicting viewpoints. The agreement spans technical, policy, and social dimensions of digital development, suggesting a mature understanding of the multi-faceted nature of digital inclusion challenges. This consensus provides a strong foundation for collaborative action in digital development initiatives.


Differences

Different viewpoints

Unexpected differences

Overall assessment

Summary

The discussion showed remarkable consensus among speakers on fundamental goals and challenges, with no direct disagreements identified. The main areas of difference were in emphasis and approach rather than conflicting viewpoints.


Disagreement level

Very low disagreement level. This high level of consensus suggests either a well-aligned group of stakeholders or potentially indicates that more diverse perspectives (such as private sector, different regional viewpoints, or alternative development approaches) might be missing from the discussion. The lack of substantive disagreement, while positive for collaboration, may also suggest limited critical examination of different approaches to digital development challenges.


Partial agreements



Takeaways

Key takeaways

Digital transformation must be inclusive, responsible, and sustainable, addressing not just connectivity but meaningful access that includes affordability, digital skills, relevant content, and safety


The global digital divide remains significant with 2.6 billion people offline, predominantly in Africa and Asia, with affordability being a major barrier (data costs can represent 10% of household food budgets in rural areas)


Innovative spectrum sharing solutions like TV White Spaces, Wi-Fi 6E, and dynamic spectrum access can reduce connectivity costs and enable last-mile access in underserved communities


Community-centered approaches that start from local needs and involve participatory design are essential for sustainable digital development, moving beyond top-down technological solutions


Cybersecurity and digital safety must be foundational and integrated into every layer of digital development, with particular attention to technology-facilitated gender-based violence and protecting vulnerable populations


Multi-stakeholder partnerships between governments, international organizations, private sector, and civil society are crucial for achieving sustainable digital transformation at scale


Human rights approaches require continuous monitoring of technology’s impact and mechanisms for course correction, recognizing that technology deployment brings both benefits and new risks


Resolutions and action items

Continue collaboration between FCDO, ITU, and local partners in the Digital Access Programme across Brazil, Indonesia, Kenya, Nigeria, and South Africa


Expand knowledge sharing of successful models and practices from the five focal countries to other regions on a demand basis


Develop dynamic spectrum access certification programs to help internet service providers understand spectrum sharing opportunities


Publish summary report of the session to the IGF website for broader community access


Continue capacity building for small-medium enterprises, particularly those led by women, youth, and persons with disabilities, in deploying affordable rural connectivity


Advance regulatory frameworks for innovation spectrum in 3.8-4.2 GHz and lower 6 GHz bands to enable more affordable connectivity solutions


Unresolved issues

How to effectively balance innovation and increasing connectivity with preventing new harms such as cybersecurity threats, disinformation, and technology-facilitated gender-based violence


Accurate population counting and mapping in underserved areas to better understand true connectivity needs and gaps


Sustainable financing mechanisms for long-term digital infrastructure development in rural and underserved communities


How to ensure consistent human rights protections across different local contexts and regulatory environments


Bridging the gap between global normative frameworks and local implementation realities


Addressing the usage gap even where network coverage exists, particularly around digital skills and trust in online services


Suggested compromises

Adopting flexible and agile delivery models that can adapt to different local contexts while maintaining core principles of inclusion, responsibility, and sustainability


Implementing both connectivity expansion and risk mitigation measures simultaneously rather than treating them as competing priorities


Combining top-down policy frameworks with bottom-up community engagement to ensure both systemic change and local relevance


Balancing support for local organizations with partnerships with global technical experts to leverage both local knowledge and international expertise


Integrating cybersecurity and digital safety training into all digital skills development programs rather than treating them as separate initiatives


Thought provoking comments

We find this way of articulating our thinking quite useful and we’ve developed this policy framework… when we think about digital transformation, we are actually referring to digital transformation typically of the economy as people think of spontaneously, but also very much of government and of society in the broad sense of the term… However, we don’t want to let’s say promote a digital transformation just for the sake of it. We want it to be inclusive responsible and sustainable

Speaker

Alessandra Lustrati


Reason

This comment reframes digital transformation from a purely technological or economic concept to a holistic societal transformation with ethical guardrails. It introduces the critical distinction between transformation ‘for its own sake’ versus purposeful, values-driven transformation.


Impact

This established the foundational framework for the entire discussion, with subsequent speakers consistently referencing and building upon the three pillars of inclusive, responsible, and sustainable transformation. It shifted the conversation from technical connectivity issues to broader questions of social impact and rights.


By meaningful connectivity it means that users have access to a safe, satisfying, enriching and productive online experience at an affordable cost… There is also a usage gap and there are many reasons why there is and still remains a usage gap and this is often the primary two of the primary reasons are down to affordability but also a lack of digital skills, awareness, knowledge, maybe local content and trust.

Speaker

Samantha O’Riordan


Reason

This comment introduces crucial nuance by distinguishing between mere connectivity and meaningful connectivity, highlighting that technical coverage doesn’t automatically translate to beneficial usage. The emphasis on trust as a barrier is particularly insightful.


Impact

This shifted the discussion from infrastructure-focused metrics to user-centered outcomes. It influenced subsequent speakers to address not just connectivity solutions but also digital skills, local content, and community engagement approaches.


To give you an analogy of affordability of data here in South Africa vis-à-vis the daily household food basket cost… So a rural household owner will have to debate whether they should buy data or put bread on the table and also afford other things.

Speaker

Luzango Mfupe


Reason

This powerful analogy makes abstract affordability concerns tangible by framing digital access as a basic needs trade-off. It humanizes the digital divide discussion and highlights the real-world constraints faced by underserved populations.


Impact

This comment grounded the technical discussion in lived reality, influencing the conversation to consider not just technical solutions but the socioeconomic context in which they must operate. It reinforced the need for innovative, low-cost approaches.


We only know the only way to be effective in responding to the needs of the local communities and the local realities and the context is to have by design the digital transformation policy being produced and discussed at the local level with the relevant actors, with the traditionally marginalized communities also because we need to go beyond the top-down approach

Speaker

Maria Paz Canales


Reason

This comment challenges the dominant paradigm of externally-designed digital solutions by advocating for participatory, bottom-up approaches. It introduces the critical concept of community agency in digital transformation.


Impact

This shifted the discussion toward governance and participation models, prompting Alessandra to elaborate on community networks approaches and reinforcing the theme that emerged throughout the session about the importance of local ownership and participation.


It makes me doubt if we really have the right figure of the global population as 8 billion or there are more people that we are not really counting as we think about this stuff… looking at the intersection of connectivity and innovation was something that struck my mind was we have an avenue to have community networks really function to deliver meaningful and affordable access in the rural areas.

Speaker

Leonard Mabele


Reason

This observation challenges fundamental assumptions about population data and connectivity statistics, suggesting that underserved populations may be systematically undercounted. It connects lived experience with policy implications.


Impact

This comment brought the discussion full circle by questioning the very data foundations upon which digital development policies are built, while reinforcing the community networks theme that had emerged as a key solution throughout the session.


Overall assessment

These key comments collectively transformed what could have been a technical discussion about connectivity infrastructure into a nuanced exploration of human-centered digital development. The progression moved from establishing ethical frameworks (Alessandra), through defining meaningful outcomes (Samantha), to grounding discussions in lived reality (Luzango), advocating for participatory approaches (Maria), and finally questioning fundamental assumptions (Leonard). Each comment built upon previous insights while introducing new layers of complexity, creating a rich dialogue that balanced technical solutions with social justice concerns. The comments demonstrated how effective multi-stakeholder dialogue can evolve from presenting individual perspectives to creating shared understanding around the need for inclusive, participatory, and contextually-appropriate digital transformation approaches.


Follow-up questions

How can we ensure that all voices, especially those historically excluded, are heard in shaping our digital future?

Speaker

Neil Wilson


Explanation

This was posed as a key question for the session to explore collaborative and inclusive solutions in digital governance


How do we connect the unconnected?

Speaker

Neil Wilson


Explanation

This addresses the fundamental challenge of reaching the 2.6 billion people still offline globally


How do we balance innovation with rights protection?

Speaker

Neil Wilson


Explanation

This explores the tension between technological advancement and ensuring human rights are respected in digital transformation


How can we build resilient, rights-respecting digital infrastructure that serves everyone everywhere?

Speaker

Neil Wilson


Explanation

This addresses the need for inclusive and sustainable digital infrastructure development


How can one reduce the cost of connectivity?

Speaker

Luzango Mfupe


Explanation

This is critical for addressing affordability barriers, especially in developing countries where data costs compete with basic necessities like food


How can we create that digital divide bridge, particularly when looking at the aspect of digital innovation in rural areas?

Speaker

Leonard Mabele


Explanation

This addresses the gap in digital innovation opportunities between urban and rural communities


What are the benefits of having participatory processes in the design and implementation of digital strategies?

Speaker

Maria Paz Canales


Explanation

This explores how to move beyond top-down approaches to ensure community needs and contexts are properly addressed


How do we prevent technology-facilitated gender-based violence rather than only dealing with the consequences?

Speaker

Alessandra Lustrati


Explanation

This addresses the need for proactive approaches to address TFGBV at its root causes


Do we really have the right figure of the global population, and are there more people that we are not counting in underserved areas?

Speaker

Leonard Mabele


Explanation

This questions the accuracy of population data used for planning connectivity initiatives, particularly in remote areas


How can community networks support innovation in underserved sectors like agriculture?

Speaker

Leonard Mabele


Explanation

This explores the intersection of connectivity and sector-specific digital solutions for rural communities


Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

Day 0 Event #236 EU Rules on Disinformation Who Are Friends or Foes

Session at a glance

Summary

This Internet Governance Forum session focused on identifying allies and challenges in combating disinformation while protecting freedom of expression. The discussion brought together representatives from European institutions, fact-checking organizations, and civil society to examine the complex landscape of information integrity.


Paula Gori from EDMO (European Digital Media Observatory) opened by highlighting the dual challenge facing democracies: the spread of disinformation through various channels including AI-generated content, and the growing rhetoric against policy frameworks designed to address disinformation. She emphasized that effective responses must be grounded in fundamental rights while focusing on algorithmic transparency and multi-stakeholder approaches rather than content deletion.


Benjamin Schultz from the American Sunlight Project described the deteriorating situation in the United States, where democracy is backsliding and platforms are moving closer to the administration. However, he offered hope through recent bipartisan success in banning non-consensual deepfake pornography, suggesting that collaboration on specific issues with broad support could maintain transatlantic cooperation.


Nordic representatives Mikko Salo from Finland and Morten Langfeldt Dahlback from Norway provided regional perspectives on the challenges. Salo emphasized the urgent need for AI literacy and teacher training, particularly in trust-based Nordic societies. Dahlback raised three critical concerns: deteriorating access to platform data for research, the delicate balance between independence and government cooperation for fact-checkers, and the shift from observable public disinformation to private AI chatbot interactions that fact-checkers cannot monitor.


Alberto Rabbachin from the European Commission outlined the EU’s comprehensive framework, including the Digital Services Act and the Code of Practice on Disinformation, which now covers 42 signatories with 128 specific measures. He stressed that the EU supports independent fact-checking organizations rather than determining what constitutes disinformation itself.


The discussion concluded with recognition that the battle against disinformation is evolving from reactive fact-checking toward proactive media literacy and user empowerment, as AI makes the information landscape increasingly complex and personalized.


Keypoints

## Major Discussion Points:


– **The complexity of disinformation as a global phenomenon**: Speakers emphasized that disinformation is not a simple problem with easy solutions, involving state and non-state actors, AI-generated content, and targeting various issues like elections, health, climate change, and migration. The phenomenon creates doubt and division in society while eroding information integrity essential for democratic processes.


– **Regulatory approaches and the tension between content moderation and freedom of expression**: The discussion covered various policy frameworks including the EU’s Digital Services Act, UNESCO guidelines, and the Global Digital Compact. There’s ongoing debate about balancing disinformation countermeasures with protecting fundamental rights and free speech, with speakers noting that emotional rhetoric often overshadows factual assessment of these policies.


– **Transatlantic divergence and changing political landscape**: Speakers highlighted growing differences between US and European approaches to platform regulation and content moderation, particularly following recent political changes in the US. This includes concerns about democratic backsliding, reduced cooperation between platforms and fact-checkers, and threats to research access.


– **The shift from reactive fact-checking to proactive media literacy**: Multiple speakers discussed the evolution from traditional fact-checking and content debunking toward empowering users with digital and AI literacy skills. This shift is driven partly by the rise of AI chatbots that generate personalized responses invisible to external fact-checkers.


– **Challenges in understanding the scope and impact of disinformation**: Speakers noted difficulties in measuring the actual extent of disinformation due to limited platform transparency, reduced research access, and the complexity of distinguishing disinformation from the broader information ecosystem. This knowledge gap hampers effective policy responses.


## Overall Purpose:


The discussion aimed to examine the current landscape of internet governance and disinformation, identifying key stakeholders (“friends and foes”) in the fight against false information while exploring policy approaches, challenges, and future directions for maintaining information integrity in democratic societies.


## Overall Tone:


The discussion maintained a professional but increasingly concerned tone throughout. It began with a comprehensive, somewhat optimistic overview of existing frameworks and cooperation mechanisms, but gradually became more sobering as speakers addressed current challenges including political polarization, regulatory divergence, and technological complications from AI. While speakers acknowledged significant obstacles, they maintained a constructive approach focused on finding solutions and maintaining international cooperation despite growing difficulties.


Speakers

**Speakers from the provided list:**


– **Moderator (Giacomo)** – Session moderator organizing the discussion on Internet Governance and disinformation


– **Paula Gori** – Secretary General of EDMO (European Digital Media Observatory), the body tasked by the European Union for fighting disinformation


– **Benjamin Shultz** – Works for American Sunlight Project, a non-profit based in Washington D.C. that analyzes and fights back against information campaigns that undermine democracy; currently based in Berlin


– **Mikko Salo** – Representative of Faktabari, a Finnish NGO focused on fact-checking and digital information literacy services; part of the Nordic hub within EDMO network


– **Morten Langfeldt Dahlback** – From Faktisk, the Norwegian fact-checking organization jointly owned by major Norwegian media companies including public and commercial broadcasters; coordinator of Nordisk (the Nordic hub of EDMO)


– **Alberto Rabbachin** – Representative from the European Commission


– **Audience** – Multiple audience members who asked questions during the Q&A session


**Additional speakers:**


– **Eric Lambert** – Mentioned as being present to make the report of the session, described as “an essential figure” working “behind the scene”


– **Lou Kotny** – Retired American librarian who asked a question about EU bias regarding the Ukraine war


– **Thora** – PhD researcher from Iceland examining how large platforms and search engines undermine democracy; research fellow at the Humboldt Institute


– **Mohamed Aded Ali** – From Somalia, part of the RECIPE programme, asked about recognizing AI propaganda and digital integrity violations


Full session report

# Internet Governance Forum Session: Combating Disinformation – Identifying Allies and Challenges


## Executive Summary


This Internet Governance Forum session brought together European policymakers, fact-checking organisations, and civil society representatives to examine the evolving landscape of disinformation and information integrity. Moderated by **Giacomo**, the discussion featured perspectives from EDMO, Nordic fact-checking organisations, the American Sunlight Project, and the European Commission.


The session highlighted the complexity of addressing disinformation while protecting fundamental rights, with speakers discussing challenges ranging from AI-generated content to platform transparency and the need for enhanced media literacy. Key themes included the evolution from reactive fact-checking to proactive education approaches, concerns about research access to platform data, and the importance of maintaining independence while fostering multi-stakeholder cooperation.


## Opening Framework and Context


**Paula Gori**, Secretary General of EDMO (European Digital Media Observatory), opened by characterising disinformation as a phenomenon that “creates doubt and division in society” while eroding information integrity essential for democratic decision-making. She noted that disinformation manifests across multiple domains – elections, health, climate change, and migration – involving both state and non-state actors, including increasingly sophisticated AI-generated content.


Gori outlined EDMO’s structure as a network of 14 hubs covering EU member states, soon expanding to 15 with Ukraine and Moldova, comprising more than 120 organizations. She referenced Eurobarometer survey results showing that 38% of Europeans consider disinformation one of the biggest threats to democracy, with 82% considering it a problem for democracy.


She positioned EDMO’s approach within broader global frameworks, including UNESCO guidelines and the Global Digital Compact, emphasising fundamental rights, algorithmic transparency, multi-stakeholder approaches, and risk mitigation rather than content deletion. Gori also highlighted concerning rhetoric against policy frameworks designed to address disinformation, noting that “emotional rhetoric often overshadows factual assessment of these policies.”


## Nordic Perspectives: Trust, Education, and Evolving Challenges


The moderator **Giacomo** opened the Nordic discussion by asking whether participants were “more afraid of neighbors or supposed friends,” prompting responses about regional security dynamics.


**Mikko Salo** from Faktabari in Finland responded by referencing Finland’s 50-year history of preparedness with neighbors, then emphasised the urgent need for AI literacy, particularly in trust-based Nordic societies. He introduced the concept of “AI native persons,” questioning how people who grow up with AI will develop critical thinking skills. His central argument was that “people need to develop AI literacy and learn to think critically before using AI tools.”


Salo also raised questions about societal investment in information integrity, referencing security spending and suggesting that cognitive security deserves significant attention as part of whole-of-society security approaches.


**Morten Langfeldt Dahlback** from Faktisk in Norway identified three critical concerns challenging current approaches to combating disinformation:


First, he highlighted deteriorating access to platform data for research, noting that “major platforms are limiting researcher access to data, with research APIs being more restricted than expected.” He expressed concern that “we don’t know enough about the scope of the problem, and we don’t know enough about its impact,” while “the conditions for gaining more knowledge about this problem have become worse.”


Second, Dahlback addressed the balance between independence and government cooperation for fact-checkers, observing that “once our objectives are aligned with the objectives of governments and of other regulatory and official bodies, it’s easy for others to throw our independence into doubt, because the alignment is too close.”


Third, he identified the shift from observable public disinformation to private AI chatbot interactions that fact-checkers cannot monitor. He explained that “when you use chatbots like ChatGPT or Claude, the information that you receive from the chatbot is not in the public sphere at all,” making traditional fact-checking approaches obsolete. This led him to suggest “a transition from more debunking and fact-checking work like what we’ve been engaged in so far to more literacy work.”


## Transatlantic Perspectives and Political Challenges


**Benjamin Shultz** from the American Sunlight Project described the deteriorating situation in the United States, characterising it as “democratic backsliding” with platforms moving closer to the administration. He described the current environment as one where “bad actors are becoming more active in spreading information campaigns that undermine democracy and tear at social fabric.”


However, Shultz offered a pragmatic path forward through recent bipartisan success in banning non-consensual deepfake pornography. He argued that “small steps like these that have been taken in the states that do have broad support” could maintain transatlantic cooperation despite broader political tensions.


## European Union Policy Framework and Implementation


**Alberto Rabbachin** from the European Commission provided an overview of the EU’s regulatory approach, emphasising that European frameworks focus on algorithmic transparency and platform accountability rather than content censorship.


Rabbachin outlined the Digital Services Act as “pioneering regulation that addresses disinformation while protecting freedom of expression by focusing on algorithm functioning rather than content.” He stressed that the EU supports independent fact-checking organisations rather than determining what constitutes disinformation itself, noting that “the EU supports an independent, multidisciplinary community of more than 120 organisations whose fact-checking work is completely independent from the European Commission and governments.”


He detailed the evolution of the Code of Practice on Disinformation, which has grown from 16 signatories with 21 commitments to 42 signatories with 43 commitments and 128 measures. He announced that this code would be fully integrated into the DSA framework as of July 1st, making it auditable and creating binding obligations for platform signatories.


Regarding research access to platform data, Rabbachin acknowledged the challenges while pointing to upcoming delegated acts designed to improve researcher access.


## Audience Engagement


The audience questions revealed additional concerns within the broader community working on information integrity issues.


**Thora**, a PhD researcher from Iceland, highlighted ongoing problems with academic access to platform data, noting that “large platforms are dragging their feet on providing academic access, claiming the EU needs to make definitions first.”


**Mohamed Aded Ali** from Somalia raised questions about recognising AI propaganda and digital integrity violations, highlighting the global nature of these challenges.


**Lou Kotny**, a retired American librarian, raised concerns about potential EU bias regarding the Ukraine war, introducing questions about how fact-checking organisations maintain objectivity in politically charged environments.


## Key Themes and Challenges


Several important themes emerged from the discussion:


**Shift Toward Media Literacy**: Multiple speakers emphasised the growing importance of media literacy and critical thinking education, with some suggesting this represents a necessary evolution from traditional fact-checking approaches.


**Platform Transparency Concerns**: Both researchers and fact-checkers expressed frustration with decreasing access to platform data needed for understanding and addressing disinformation.


**Independence vs. Cooperation**: The tension between maintaining organisational independence while cooperating with government initiatives emerged as a significant concern for civil society organisations.


**AI Challenges**: All speakers acknowledged that AI is fundamentally changing the disinformation landscape, making detection more difficult and requiring new approaches, particularly regarding private AI interactions that are not publicly observable.


**Local Context**: Speakers emphasised that disinformation responses must account for local cultural, political, and linguistic contexts.


## Conclusion


The session demonstrated the complexity of addressing disinformation while protecting fundamental rights and democratic values. While speakers agreed on the importance of multi-stakeholder cooperation and media literacy, significant challenges remain around platform transparency, maintaining organisational independence, and adapting to new technologies.


The moderator concluded by noting that information integrity is becoming increasingly important and announced an upcoming workshop by BBC and Deutsche Welle. **Eric Lambert** was mentioned as the session’s rapporteur.


The discussion revealed a field grappling with fundamental changes in how information is created and consumed, particularly the shift from public, observable disinformation to private AI interactions that traditional oversight mechanisms cannot monitor.


Session transcript

Moderator: Good morning. Good morning, everybody. Thank you for being so kind to be here so early in the morning, after a long trip, for this session that will be about, as you have seen from the title, trying to understand who are the friends and who are the foes in this very complicated and unclear situation for Internet Governance, and especially for the fight against disinformation. It’s a session that will have some participants with me here from the Nordic countries, one from Finland, another one from Norway, but we will also have other participants joining remotely. One will be from Brussels, from the European Commission, and we will have somebody who is based in Berlin at the moment but is one of the most active people in fact-checking and countering disinformation in the U.S., and we will have Paula Gori, who will open. She is the Secretary General of EDMO, the body that the European Union has tasked with fighting disinformation. So, since we don’t have too much time, I would prefer that we start immediately. If Paula is ready, I will give the floor to her. Hello, good morning. Are you ready? Yes, she’s with us. Welcome, Paula. You look frozen.


Paula Gori: I guess you’re hearing me and are sharing also a quick presentation. Can you hear me? Yes, we can hear you. Can you hear me well? Yes, well, but we don’t see the presentation yet. It is coming. I see it on my screen, so just let me know when you see it. Yes, now we can see the first slide, it’s okay. Okay, great. Thank you very much, Giacomo. And good morning, everyone. I’m very happy to start the day, actually this day zero, with this session. As Giacomo was saying, the overall IGF focus is on internet governance, and within this topic, of course, disinformation is creating, if you want, quite a lot of emotional reactions, also in past editions of the IGF. What I just wanted to bring up here today, again, is the situation which we are all aware of: on one side, we have the spread of disinformation. It touches different policy areas, like migration, climate change, elections, of course health, and they are often very linked. It can be spread internally and externally and by internal and external actors. It can be state-backed, it can also be not state-backed. There can be the use of proxies. There can be the use of artificial intelligence, both to generate content but also to spread it. These are all things that I think we all know, as well as the fact that disinformation is there in a broader mission of creating doubt, creating division in our society, putting us in a situation in which, at a certain moment, we aren’t actually in a position to really be sure about things, because we get so much information with so many different facts, or non-facts, actually. And this puts us in a very difficult situation overall. And this erodes, of course, the information integrity.
Information integrity is key in a democratic process because, to put it in a very simple and easy way, if we want to take any decision, we have to have a basis on which we can make this decision. So if that basis is actually not based on facts, then we are in a situation in which we may make a decision which is not in our interest in the end. On the other side, what we are seeing more and more is a huge rhetoric against any policy framework that tries to tackle disinformation. One of the main arguments at the basis of this is the fact that it may violate freedom of expression, which, if you look at it from a very neutral point of view, is a very fair concern, because it is very important that whichever policy deals with disinformation respects fundamental rights and also freedom of expression. But the rhetoric that we’re seeing there is actually more, if you want, an emotional one rather than a rhetoric which actually looks at the real framework and then does a real assessment of whether freedom of expression is violated or not, because very often it actually is not. And the two reinforce each other. And this is something we are seeing globally. So I’m just setting the scene in a very global way. What we are seeing as approaches (and of course EDMO is part of this; I will say a few words about EDMO, and those who are familiar with the IGF are also familiar with EDMO, because it’s not the first session we’re having here) is that whichever response to get back to information integrity starts of course with digital literacy, media literacy, with strengthening quality journalism and so on.
And if you look at the global frameworks that we have around, like the Global Digital Compact, the UNESCO guidelines on the governance of digital platforms, the recent communication by the High Representative and the European Commission on an international digital strategy for the EU, and also the Digital Services Act, which is a regulation, there are a few elements which are common there. Any response, as we were saying before, has to be grounded on fundamental values and the respect of human rights; we cannot transcend from that, it has to happen. The focus is rather on algorithms and transparency. There should be a multi-stakeholder approach; the IGF is actually, I think, one of the responses to that, right? So it’s really at a multi-stakeholder level. It is based on risk mitigation, which means that it looks at the risk that the way platforms, for example, or some online actors work could have on certain elements, for example public health, minors, civic discourse, and so on. So just to remind us that the focus is not on deleting content or looking at content, but rather on whether the way the platforms work can actually be abused for malign purposes. I just wanted to set the scene by highlighting these differences. And the instruments that I was mentioning earlier, I think, show that we are all going in that direction, so that the global principles overall are those. And then of course the regional specificities rightly also have differences, and this is normal. I don’t think we will ever get to something which is fully global in this sense, but this is fine. As long as the principles are shared and agreed, then I think it is important to keep regional specificities, also because, especially when it comes to disinformation, it is a global phenomenon, but local characteristics play quite a strong role.
Now, I will not go into this slide, but I just wanted to show these two slides. This one is on climate change disinformation. The next one is on the economics of disinformation. Just to show how complicated it is to navigate the disinformation sphere. It’s not just one problem that is easy to understand and with an easy solution. This makes it very complicated, but probably also very interesting for everybody involved to try to address it. And I will not go through it, as I was saying, but just, I wanted to just. So with this, in the interest of time, I will just, sorry? Can you repeat the last phrase, you broke up? Yeah, sorry. So I just wanted to say that I was showing these slides not to go through them, because we don’t have time, but just to show how complex the disinformation phenomenon is, and by consequence how complex it is also to find a solution. So I think it’s not by chance that for years and years we have all been sitting together, also sometimes disagreeing, in trying to find a solution, because the problem itself is complex and we cannot always simplify complex situations, like in the case of disinformation, and you cannot simplify it precisely because human rights are at stake. So before giving the floor to our next panelist, I just wanted to recap, for those who are not familiar with Edmo, what is Edmo doing, and why was I showing all this complexity? Because the complexity brings us to a situation in which we have to understand the phenomenon properly in order to come up with solutions, and the solutions cannot be just one solution, it’s a mix of different solutions. And what Edmo is doing, Edmo is funded by the European Commission and is one of the pillars of the response to disinformation, is precisely that. We are a sort of platform that brings together the different stakeholders, a bit like what the IGF is doing more generally on internet governance: we are bringing them all together. When possible, we are trying to provide tools like trainings or repositories of fact-checking articles and so on. And by putting the community together, we are also in a position to find common trends, to do investigations, to do joint media literacy initiatives, to do policy analysis. So how are we doing it? Just to say that we have an Edmo platform, if you want, which goes EU-wide, and then we work with 14 hubs, which are national or multinational, and they cover all EU member states. And these are key, because, as I was saying at the beginning, we cannot avoid looking at the local specificities when it comes to disinformation. Very simply said, the culture, the policy, the politics, the history, the language, the media diet of a country actually have an impact on whether disinformation is impactful or not, on whether it is entering a country or not, and so on. So we really need the local element to be there, otherwise we would miss part of the picture. These hubs, working all together under our coordination, also allow us, as you can imagine, to do pan-European analyses, pan-European comparisons, and so on. So I hope I was…
When possible, we are trying to provide tools like trainings or like repositories of fact-checking articles and so on. And by putting the community together, we are also in a position to find common trends, to do investigations, to do joint media literacy initiatives, to do policy analysis. So how are we doing it? Just to say that we have an Edmo platform, if you want, which goes EU-wide, and then we work with 14 hubs, which are national or multinational, they cover all EU member states. And these are key, because you remember what I was saying at the beginning, we cannot avoid looking at the local specificities when it comes to disinformation. Very easily said, the culture, the policy, the politics, the history, the language, the media diet of a country are actually having an impact on whether disinformation is impactful or not, if it is entering a country or not, and so on. So we really need the local element to be there, otherwise we would miss part of the picture. These hubs working all together under our coordination also allow us, as you can imagine, to do pan-European analysis, pan-European comparison, and so on. So I hope I was… clear enough to somehow set this scene. I started with the global element and then I focused a little more on the EU, and our next speakers will continue in this sense, and I think I can give it


Moderator: over to Benjamin Shultz. Thank you very much, Paula. Yes, from Europe you made a very comprehensive panorama. Now we go to the US. Benjamin, American Sunlight, can you introduce yourself? Yes. Am I coming through clear? No, no, you can go now. Oh, okay. Yeah. Is the audio okay? Yes,


Benjamin Shultz: please. Wonderful. Well, thank you so much, Giacomo, Paula. I saw Mikko and Morten there on the screen. It’s great to be back here with you all at the IGF. This is a really wonderful gathering and I think a great place for dialogue, for understanding, for discussing the issues of the day and really remembering just how global and borderless and connected the internet makes us all. And of course, that leaves ample opportunity for bad actors to misuse the internet and all of its wonderful technologies to spread disinformation. My name is Ben. I work for the American Sunlight Project, a non-profit based in Washington, D.C., although I’m based in Berlin at the moment. And we analyze and fight back against information campaigns that undermine democracy and pollute our information environment. It’s no secret that in the US a lot has changed in the last six months. Things have shifted. We’ve noticed. Things have shifted greatly. And we’ve seen, just putting it frankly, democracy begin to backslide in the United States. We’ve seen bad actors become more active than ever in spreading information campaigns and using information operations to tear at the social fabric of the US. And we’ve also seen the platforms move closer and closer to the administration, really in a total sea change from the last four years and even the four years before that. We’ve seen people be denied entry to the US based on having text messages critical of the administration, something that, really, as an American, I thought I would never see happen to my country. And so in this day and age, in which content moderation, the removal of harmful or illegal content online, is being falsely equated to censorship, to a violation of the right to free speech, free expression, in order to really make progress on making our internet safer and continuing the work that we all do, we have to really start to reframe how we approach this.
We have to start to think about new ways, new creative ways to maintain the alliance, the Transatlantic Alliance, in these rough times. And so in the preparatory call that we all hopped on for this panel, I was told not to be so negative. So I’m gonna cut myself off there on the bad and we’re gonna shift to the good. And I’m gonna tell you all kind of how I’m approaching this reframing. As someone working in this space, someone whose organization has been called evil by a certain person that runs X and so forth, there’s some work we can do, I think, to maintain the progress that we’ve made in making the internet a safer, better place. Recently in the US, non-consensual explicit deepfakes, colloquially known as deepfake porn, have actually been made illegal. And this is a really groundbreaking achievement and advancement in our country. And it’s something that we’ve done a lot of advocacy work on for a really long time. And finally, just in the last months, we had enough votes in Congress to make this happen. And this achieved wide bipartisan
And I think we’re going to get into a little bit more of this later on, on this panel, on, you know, the varying degrees of regulation in different European countries. Of course, Europe is a big continent. The EU is big 20, 26, 7, you know, plus a few more in the EEA member states. And there’s a lot of conflicting values and arguments around regulating content online. But my hope amidst all of the not-so-nice things happening in the U.S. right now and the, you know, unfortunate degradation of the transatlantic relationship, my hope is that with small steps like these that have been taken in the states that do have broad support, such as banning explicit deepfakes that are made non-consensually, my hope is that collaborating on these issues that Europe and the U.S. and countries all around the world can continue the dialogue and continue to make some progress on keeping the internet safe and making it safer. And so with that, I will stop myself and pass. it back to you, Giacomo, and the panel can continue and I’m sure we’ll have some good discussion coming up. Thank you very much, Benjamin. Just one question, you moved to Berlin before November or after November? I moved in January. The timing just sort of worked out, but you know.


Moderator: Very timely. Yeah. I can understand. Okay, thank you. I think, I hope, that we will have time for questions. Remember that there is a mic over there. As soon as we finish with the presentations, we will discuss with the audience, because I think there are questions coming. So, who’s next? Okay. Mikko, please introduce yourself. Thank you. You are one of the members


Mikko Salo: of the network that Edmo just presented to us. So, my name is Mikko Salo. I’m representing a Finnish NGO working on fact-checking and digital information literacy services, Faktabari. We are part of Nordis, the Nordic hub of Edmo, and we’re working with Morten on that one. I will probably open it up a little bit from my angle, the civil society point of view, whereas I understood Morten covers more the journalistic side that we are working on. But yeah, indeed, very, very challenging times. We started 11 years ago, when it was still about accuracy. I think now it’s more about the integrity of the information, and when you are coming from a country like Finland, which is now praised for its preparedness culture, I try to phrase it like this: where do we need to prepare now? And I think it’s very much the information integrity and the kind of AI literacy that is very, very urgently needed. And there, our small NGO has been working with government officials, pushing them to retrain the teachers, and then providing guidance to teachers who are, of course, very lost with AI at the moment. And why am I so worried, as an organization that started from fact-checking? Because of what is happening to our information, what is happening to our sources. Do people really know anymore where the information stems from, and what kind of consequences it has, especially in trust-based societies like the Nordic societies? And so, these are big challenges. But what gives me some hope is that I can say that we are happy to be part of the EU context, that there is at least some sort of rulebook for an internet that is badly broken. There is a rising awareness that we need to know something. I think, as we’re speaking, they are currently in The Hague actually framing what security is at the moment, and I would talk about cognitive security. And then we are talking about the famous five percent of
investment in security, but what I’m referring to now is the 1.5 percent, the whole-of-society security and the information integrity. And I think that’s the frame we should be talking about. In general, media education investments are pretty much non-existent all over the world at the moment. So there is a lot to improve, at least at the moment. Finland is apparently performing the best as we are doing it. If I would invest in something now, and what we are trying to do in Finland is exactly going back to the basics, it is still the children, the next generation. I think that’s where we have to find some sort of protection and ensure that, before they… I mean, this sounds kind of crazy, but they need to be able to think before they use AI. And I was just framing this, and I was actually asking ChatGPT what an AI native person would look like. Because if we are not able to think ourselves, we are not able to use AI as it’s meant to be used. So I would perhaps leave you with these thoughts about the importance of education and the possibilities that we have in empowering teachers in different societies to at least prepare the youngsters for information integrity. Thank you. Thank you very much, Mikko. Are you more afraid of your neighbours or your supposed friends? We are not afraid of our neighbours. We are prepared, and there is 50 years of history of that. But everybody has a lot to do with this information side, and it’s very mental, so to say. And I think nobody’s too prepared for that one. And this is a new battlefield, and we just need to take it calm and try to progress. And that’s why the IGF is doing very important work to keep the internet somehow in place.


Moderator: Thank you very much. So before giving the floor to Morten, who is the next speaker, I want to mention that we also have with us Eric Lambert, who will make the report of this session and is an essential figure. He’s not here with us, but he’s behind the scenes. Morten, your organisation is partially also owned by the national public service broadcaster.


Morten Langfeldt Dahlback: Among others, yes. So my name is Morten. I’m from Faktisk, the Norwegian fact-checking organisation. We’re jointly owned by all of the major media companies in Norway, including the public broadcaster and also the commercial public broadcaster, yes. So I’m going to talk about three issues that I think are important in this context. I’m both part of Faktisk, the fact-checker, and also the coordinator of Nordis, the hub of Edmo that Mikko and Faktabari are also part of. And the first point I want to raise is that we talk about disinformation and misinformation here. I think one of the core challenges that we face in responding to this problem is that we don’t know enough about the scope of the problem, and we don’t know enough about its impact, at least in a lot of domains. And I think the conditions for gaining more knowledge about this problem have become worse over the past few months. The reason why it’s becoming worse is regulatory divergence between Europe and the US. Up until about a year ago, several pieces of legislation came into being which were supposed to increase transparency from major tech platforms, forcing them to provide more information to independent fact-checkers, but also to researchers. And I think this is one. Except one. Except one, of course. The legislation was supposed to apply to all of them, but X refused to be part of it. That is correct. But we see that there were already, this last year, or this year, some signs that things were deteriorating, when Meta closed down the fact-checking program in the US. And we were expecting them to do so in Europe as well. That hasn’t happened, fortunately. But we think these programs that allow us to gain more knowledge about the disinformation phenomenon are probably under threat, which is going to make our life more difficult. But there is a different problem here as well.
Because of the wealth of information that is online in the first place, it’s very difficult to estimate the scope of disinformation there. So you can see, when Paula, for example, shows you a model of the disinformation phenomenon, it’s very complex. It has a lot of variables. And it’s very difficult to disentangle the overall composition of platforms and the algorithms there from disinformation and misinformation specifically. So I think it’s become more difficult to obtain knowledge about this phenomenon. And that hampers the size, the scope of our response. So I think we have a fundamental problem there. It’s probably solvable, but it’s something that worries me. The second thing I want to address is the relationship between policymakers and political bodies and independent actors, like Faktisk, for example, and like Faktabari, now that disinformation and misinformation is, to a greater extent, on the political agenda. So I think, overall, it’s a good thing that both governments and the European Union and others are attempting to limit the impact and the spread of disinformation. But it also places independent actors in a difficult position, because we need to maintain our independence from governments and from regulatory bodies in order to do our job and to maintain the trust of our audience. And once our objectives are aligned with the objectives of governments and of other regulatory and official bodies, I think it’s easy for others to throw our independence into doubt, because the alignment is too close. And this is, I think, a very important problem. It is something that both we as fact-checkers and as hubs of Edmo, but also the political bodies, need to work out over the next couple of years, to figure out what would be the right kind of cooperative coexistence between journalistic organizations that have been at the forefront of the battle against disinformation for years and governmental bodies as well.
I think it's a difficult challenge, but it's one that we are in the process of addressing. The final point I want to address has to do with something Mikko just mentioned: he asked ChatGPT to give him some information that would be pertinent to this session. To me this raises the worry that, when we talk about mis- and disinformation, we may be fighting yesterday's battles. Up until now, the way we have related to mis- and disinformation, both as consumers (accidental consumers, maybe) and as organizations that try to address it as a problem, is that the disinformation and misinformation out there is usually observable from the outside. We can see posts on Facebook; we can see videos on TikTok. They might be algorithmically delivered to individual people on their private feeds, but the content is out there in the open. However, when you use chatbots like ChatGPT or Claude, or whichever one you want, the information you receive from the chatbot is not in the public sphere at all. It's a response generated on the basis of a prompt that you give to the language model, which means that we as fact-checkers can't see what responses you're getting. And the more information consumption is driven into chatbots, the less we will be able to observe the misinformation out there, and the less able we will be to respond to it. I don't have a solution to this. I think what's going to happen, if this development accelerates, is that information literacy will become much more important than it is today, because it will be up to the individual consumer and user of chatbots and LLMs to actually assess the information they are being provided.
So I think we might see a transition from more debunking and fact-checking work like what we’ve been engaged in so far to more literacy work, and really empowering people to think critically about the outputs of chatbots, for example. So I’m going to close there. I think we will see some big changes in the battle against misinformation in the coming years, but it really depends on both the regulatory divergence between the US and Europe, but also the AI development and usage of AI in the general public. Thank you.


Moderator: Thank you very much. I think this last thing you said is food for thought, so we need to reflect on it. But the one who has to reflect most is probably the European Commission, which is with us in the person of Alberto Rabbachin. This shift from fact-checking to media literacy and the empowerment of users: do you agree with that?


Alberto Rabbachin: Thank you, Giacomo, for this question. Indeed, I hope you can hear me well. Yes, we can hear you well. This is certainly a shift that is happening, and we are acknowledging it. I would like to show you a few slides that I have prepared to accompany my presentation. Just give me a second to make this happen. You should be able to see it. Yes, it's coming. Okay. Still black, but we hope to see it in a second. Yes, now we can. Okay. So yes, indeed. From the European Commission's point of view, what we have in place is quite a rich framework trying to preserve the integrity of the information sphere. It is not only a problem of content; it is also a problem of the functioning of the digital information ecosystem. First of all, we have to make sure that European citizens themselves consider disinformation, misinformation and information integrity an issue, a problem, a challenge. In fact, the latest Eurobarometer surveys from 2023 and 2024, conducted ahead of the European elections, showed that 38% of Europeans consider disinformation and misinformation one of the biggest threats to democracy. More recently, 82% of Europeans said that disinformation is a problem for democracy, and most of them are aware of this problem. So we are doing something that is perceived as useful by citizens, and this is also where we have to look when we try to address the disinformation phenomenon. From the citizens' point of view too, social media and online social networks are the biggest source of the problem.
This also reflects the technological development we have witnessed in the last 10 years, in which the digital online information ecosystem became the main source of information. Some of you also mentioned the role of AI. Of course, AI opens a lot of opportunities in all sectors, but it can also be used for malicious activity. Thanks to EDMO, we are currently monitoring the amount of disinformation linked to AI-generated content, and we see that this type of content is increasing. We have witnessed this in particular in the latest national elections in Europe. But what is the EU doing? First of all, we are working with partners among EU countries, with countries outside the European borders and with international organizations, and we are very happy to be here talking about this important subject. There is also a very important mission, which is raising awareness and communicating about this phenomenon. I think EDMO is doing a great job with its network to inform citizens about the different forms this phenomenon can take. We are also promoting access to independent media and to fact-checked content, and we support media literacy activities. We also foster, in particular around the Code of Conduct on Disinformation, cooperation between social media platforms and civil society organizations. Last but not least, there is a pioneering regulation, the Digital Services Act. The Digital Services Act is the first global legal standard for tackling disinformation while protecting freedom of expression and information.
This regulation does not look at content but at how content is distributed: it looks at the functioning of the algorithms and at preventing malicious actors from abusing those algorithms to spread disinformation, manipulate public discourse and create different systemic risks. It gives the Commission strong investigatory powers, which also helps increase transparency about the functioning of social media platforms. Then, as I mentioned, we have the Code of Practice on Disinformation. The most recent development is that the Code of Practice on Disinformation has now been brought within the co-regulatory framework of the DSA, so it becomes a meaningful benchmark for very large online platforms to fulfil the DSA requirements from the disinformation point of view. It contains a large set of commitments and measures. And then there is the third pillar, societal resilience. I will put EDMO under this basket. As I said, EDMO is a great tool that we support to increase awareness about the phenomenon of disinformation through its detection and analysis. We have also supported the creation of the highest ethical and professional standards for fact-checking in Europe, and we finance a lot of media literacy activities. This is a little bit of the story of the code. We started back in 2018 with 16 signatories and 21 commitments. Now, in 2025, we have 42 signatories and a very granular code that includes 43 commitments and 128 measures. As of the 1st of July, the code, as I mentioned before, fully enters into the DSA framework and will be auditable. This is the big transformation we are making by moving the code under the DSA: the signatories of the code will need to be audited on their implementation of it. This will be an obligation under the DSA.
I'm not spending a lot of words on the code because maybe people are familiar with it, but the code tackles several areas that are relevant to the disinformation phenomenon: the monetization of disinformation, transparency of political advertising (where we also have new regulation coming into place), reducing manipulative behaviour, empowering users, empowering fact-checkers, and providing access to data for research purposes. And then I'm concluding. It is really a pleasure to see that there are a lot of EDMO representatives on this panel. It was a huge effort on our side to create this network of 14 hubs, soon to be 15. In line with the European Union's new strategy for international cooperation, we will have a new hub that will also cover Ukraine and Moldova, a critical regional spot if we want to fight disinformation. And let me also remind you, in case it is not clear to everyone how big this network is: EDMO includes more than 120 organizations across the EU, including Norway and soon also Ukraine and Moldova. Last but not least, you mentioned it at the beginning, Giacomo: media literacy. Media literacy appears in different parts of our strategy. It is part of our policy and regulatory framework, both in the DSA and in the European Media Freedom Act. We have a media literacy expert group, and the new European Board for Media Services has a subgroup on media literacy. EDMO is doing great activities, in particular at the local level, with initiatives tailored to the needs of the different member states. And through Creative Europe pilot projects, we support a lot of cross-border media literacy activities. I will stop here and give you back the floor.


Moderator: Thank you very much, Alberto. We are quite late, but I don't want to deprive the audience of the possibility to raise questions. I see that there is already somebody there.
Could you introduce yourself, please? Yes, my name is Lou Kotny.


Audience: I'm a retired American librarian, over here for my younger Norwegian-American children. On LinkedIn, I have a white paper about the Ukraine war titled Biden-Blinken's War Beginning Holocaust Objective Facts Footnoted. And two big lies are being pushed by the European Union, by Europeans. First, Kyiv 2014 was an outside-agitated coup, for four objective reasons which I put in my paper. Second, the attack in 2022 was provoked by Zelensky himself, pumped up by the Europeans in Munich, threatening that Ukraine would get nuclear weapons. And finally, which really concerns me, Europe is voting against the annual United Nations anti-Nazi resolution, which is sort of self-defining, self-incriminating, that we are quisling collaborators. Now, my question is: if the EU is so pro-war biased, shouldn't the United Nations keep it at arm's length as far as judging what's misinformation and disinformation? Thank you for letting me ask my question. Thank you. Other questions from the room? Okay, in the meantime, Alberto, do you want to answer this first question while... oh, yes, please, go ahead. There's a second question. Hi. My name is Thora. I'm a PhD researcher from Iceland examining how very large platforms and search engines are undermining democracy. I am asking about academic access, because this is a big problem. I've been a research fellow at the Humboldt Institute, where they have Friends of the DSA, a group of academics who are trying to gain this access, but the large platforms are dragging their feet and claiming that the EU has to make a few definitions in order for this to start. I'm wondering what the status of academic access is, and what we should start with? Thank you. Thank you very much. Okay. Do you want to answer this, and then Alberto will take the other question?


Morten Langfeldt Dahlback: Yes, I can just echo what was just said from the audience. We recently tried to run a project where we were supposed to work with researchers to extract information from one of the major platforms, and we noticed very quickly that the research APIs through which you can actually extract information were much more limited than we had expected. So I think this is a major problem that a lot of people experience, and it definitely has not been fixed yet.


Moderator: Okay. So, Alberto, do you have some element of an answer to the first question? And you could probably complement what has been said about access to data from the platforms, which is essential for understanding what happens.


Alberto Rabbachin: Yes, Giacomo. On the first question, and this is an important element that I want to stress: when we talk about the detection and analysis of disinformation, we don't want to be the ones calling the shots. We are supporting an independent, multidisciplinary community, represented by EDMO here: 120 organisations, selected by independent experts. The work they do in fact-checking and analysing disinformation is completely independent not only from the European Commission but also from EU governments. This is really something we take care of and want to preserve. On the second question, the Digital Services Act obliges platforms to provide data for research activity. There is an upcoming delegated act that should also raise the bar in terms of providing researchers in Europe with more access for doing their work.


Moderator: I think this is fundamental for gaining a better understanding of the phenomenon and therefore for designing proper policy responses. Thank you very much. I think we've run out of time, but there is one more question. Yes, please. Thank you very much.


Audience: My name is Mohamed Aded Ali. I'm from Somalia. I'm part of the RECIPE programme. Recognising AI propaganda in terms of digital integrity violations involves identifying when AI technologies are misused to deceive, manipulate or misinform individuals or groups. These violations can threaten trust, prosperity, ethical standards and digital communication. My question is: how can EU rules recognise this in terms of internet integrity? Thank you.


Moderator: In terms of the internet? Internet and digital integrity, based on EU rules. I think we can give a generic answer, which is that information integrity now becomes more and more, as Mikko said before, the relevant point, because thanks to artificial intelligence we will have to face a flood of automatically generated disinformation. It therefore becomes more and more important to identify which sources are reliable and whether information has been manipulated, and this, according to what Morten was saying before, will become more and more difficult. So a mix of rules, as the European Union is trying to put in place, and work on media integrity by the media and journalists is absolutely essential to try to face this unpredictable future. Thank you very much. Sorry that we didn't give you many answers, but we shared a lot of questions with you; these are the times we are living in, and we hope that in the coming days we can find some other answers from other partners. I just remind you that in a few minutes we will start, in workshop room number two, a seminar by the BBC and Deutsche Welle about how public service media could remedy part of the problems we have faced this morning. Thank you very much, everybody, for participating, and I wish you a nice IGF. Thank you all.


P

Paula Gori

Speech speed

172 words per minute

Speech length

1567 words

Speech time

545 seconds

Disinformation creates doubt and division in society, eroding information integrity essential for democratic decision-making

Explanation

Gori argues that disinformation puts society in a situation where people cannot be sure about information due to conflicting facts and non-facts. This erosion of information integrity is problematic for democracy because decision-making requires a factual basis, and without it, people may make decisions not in their interest.


Evidence

Examples given include disinformation on migration, climate change, elections, and health topics that are often interconnected


Major discussion point

Information integrity as foundation for democracy


Topics

Human rights | Sociocultural


The disinformation phenomenon is extremely complex with multiple variables, making it difficult to find simple solutions while protecting human rights

Explanation

Gori emphasizes that disinformation cannot be simplified because it involves many complex factors and human rights are at stake. She argues that the complexity requires a mix of different solutions rather than a single approach.


Evidence

References to slides showing climate change disinformation and economics of disinformation to demonstrate complexity


Major discussion point

Complexity of disinformation requires nuanced solutions


Topics

Human rights | Sociocultural


Agreed with

– Morten Langfeldt Dahlback
– Alberto Rabbachin
– Mikko Salo
– Moderator

Agreed on

AI is creating new challenges for disinformation detection and response


Global frameworks emphasize fundamental rights, algorithmic transparency, multi-stakeholder approaches, and risk mitigation rather than content deletion

Explanation

Gori outlines that international frameworks like the Global Digital Compact and UNESCO guidelines focus on protecting human rights, ensuring algorithmic transparency, involving multiple stakeholders, and mitigating risks. The approach targets how platforms work rather than directly removing content.


Evidence

References to Global Digital Compact, UNESCO guidelines on digital platform governance, EU’s international digital strategy, and Digital Service Act


Major discussion point

Framework approaches to disinformation


Topics

Legal and regulatory | Human rights


EDMO serves as a platform bringing together different stakeholders, similar to IGF’s approach to internet governance

Explanation

Gori describes EDMO as a multi-stakeholder platform that brings together various actors to address disinformation, providing tools like training and fact-checking repositories. It operates through 14 hubs covering all EU member states to address local specificities.


Evidence

EDMO works with 14 national or multinational hubs covering all EU member states, funded by the European Commission


Major discussion point

Multi-stakeholder cooperation in combating disinformation


Topics

Legal and regulatory | Sociocultural


Agreed with

– Alberto Rabbachin
– Moderator

Agreed on

Multi-stakeholder approach is essential for addressing disinformation


Local specificities in culture, politics, and language are crucial for understanding how disinformation impacts different countries

Explanation

Gori argues that culture, policy, politics, history, language, and media consumption patterns of a country significantly impact whether disinformation is effective or enters a country. This necessitates local elements in any response strategy.


Evidence

EDMO’s structure with local hubs to address regional specificities while enabling pan-European analysis


Major discussion point

Importance of local context in disinformation response


Topics

Sociocultural | Legal and regulatory


Agreed with

– Mikko Salo

Agreed on

Local context and specificities are crucial for effective disinformation response


M

Morten Langfeldt Dahlback

Speech speed

182 words per minute

Speech length

1146 words

Speech time

376 seconds

There is insufficient knowledge about the scope and impact of disinformation, and conditions for gaining this knowledge are deteriorating

Explanation

Dahlback argues that understanding the scope and impact of disinformation is limited, and the situation is worsening due to regulatory divergence between Europe and the US. He notes that legislation meant to increase platform transparency is being undermined.


Evidence

Meta closed down fact-checking programs in the US, X refused to comply with transparency legislation, and research APIs are more limited than expected


Major discussion point

Knowledge gaps about disinformation scope and impact


Topics

Legal and regulatory | Sociocultural


Independent actors like fact-checkers face challenges maintaining independence from governments while their objectives align with official bodies

Explanation

Dahlback highlights the difficulty fact-checkers face in maintaining independence and audience trust when their objectives align closely with government goals. This alignment can lead others to question their independence.


Evidence

The challenge of cooperative coexistence between journalistic organizations and governmental bodies in addressing disinformation


Major discussion point

Independence of fact-checking organizations


Topics

Human rights | Sociocultural


Disagreed with

– Alberto Rabbachin

Disagreed on

Approach to combating disinformation: regulatory vs. independence concerns


The shift toward AI chatbots creates invisible information consumption that fact-checkers cannot observe or respond to effectively

Explanation

Dahlback warns that as information consumption moves to private chatbot interactions, fact-checkers lose the ability to observe and respond to misinformation. Unlike social media posts that are publicly observable, chatbot responses are private and generated individually.


Evidence

Comparison between observable content on Facebook and TikTok versus private responses from ChatGPT and Claude


Major discussion point

AI chatbots creating invisible misinformation


Topics

Sociocultural | Legal and regulatory


Agreed with

– Paula Gori
– Alberto Rabbachin
– Mikko Salo
– Moderator

Agreed on

AI is creating new challenges for disinformation detection and response


There may be a necessary shift from fact-checking and debunking work toward more literacy work and empowering people to think critically

Explanation

Dahlback suggests that as AI-generated content becomes more prevalent and less observable, the focus should shift from reactive fact-checking to proactive media literacy. This would empower individuals to critically assess information they receive from chatbots and other AI tools.


Evidence

The increasing use of chatbots and LLMs making traditional fact-checking approaches less effective


Major discussion point

Evolution from fact-checking to media literacy


Topics

Sociocultural | Human rights


Agreed with

– Mikko Salo
– Alberto Rabbachin
– Moderator

Agreed on

Media literacy and education are fundamental to combating disinformation


Major platforms are limiting researcher access to data, with research APIs being more restricted than expected

Explanation

Dahlback reports that recent attempts to work with researchers on extracting platform information revealed that research APIs provide much more limited access than anticipated. This restricts the ability to study and understand disinformation phenomena.


Evidence

Direct experience from a recent project attempting to extract information from a major platform


Major discussion point

Platform data access for research


Topics

Legal and regulatory | Development


Disagreed with

– Alberto Rabbachin
– Audience

Disagreed on

Platform data access and transparency


B

Benjamin Shultz

Speech speed

161 words per minute

Speech length

908 words

Speech time

336 seconds

Bad actors are becoming more active in spreading information campaigns that undermine democracy and tear at social fabric

Explanation

Shultz describes how information operations are being used more aggressively to damage democratic institutions and social cohesion in the US. He notes that platforms are moving closer to the administration and that there are concerning restrictions on free expression.


Evidence

People being denied entry to the US based on critical text messages about the administration, and platforms aligning more closely with government


Major discussion point

Increasing threats to democracy from information campaigns


Topics

Human rights | Sociocultural


Small legislative victories like banning non-consensual deepfakes can maintain transatlantic cooperation despite broader challenges

Explanation

Shultz argues that despite deteriorating US-Europe relations, focusing on specific issues with broad bipartisan support can preserve cooperation. He cites the success in making non-consensual explicit deepfakes illegal as an example of achievable progress.


Evidence

Recent US legislation requiring platforms to remove deepfake videos within 48 hours of victim requests, achieved through bipartisan support


Major discussion point

Maintaining international cooperation through targeted legislation


Topics

Legal and regulatory | Human rights


M

Mikko Salo

Speech speed

130 words per minute

Speech length

685 words

Speech time

314 seconds

Investment in media education is crucial, particularly for children who need to learn critical thinking before using AI tools

Explanation

Salo emphasizes that media education investments are insufficient globally and that Finland is focusing on preparing the next generation. He argues that children need to develop thinking skills before they can effectively use AI tools.


Evidence

Finland’s work with government officials to retrain teachers and provide guidance for AI literacy, described as ‘whole of society security’


Major discussion point

Education as foundation for information integrity


Topics

Sociocultural | Development


Agreed with

– Morten Langfeldt Dahlback
– Alberto Rabbachin
– Moderator

Agreed on

Media literacy and education are fundamental to combating disinformation


People need to develop AI literacy and learn to think critically before using AI tools

Explanation

Salo argues that individuals must be able to think independently before they can properly utilize AI. He questions what an ‘AI native person’ looks like and emphasizes the importance of maintaining human critical thinking capabilities.


Evidence

Reference to asking ChatGPT for information and the need for people to assess AI outputs critically


Major discussion point

AI literacy and critical thinking skills


Topics

Sociocultural | Development


Agreed with

– Paula Gori
– Morten Langfeldt Dahlback
– Alberto Rabbachin
– Moderator

Agreed on

AI is creating new challenges for disinformation detection and response


A

Alberto Rabbachin

Speech speed

127 words per minute

Speech length

1426 words

Speech time

671 seconds

The Digital Services Act is pioneering regulation that addresses disinformation while protecting freedom of expression by focusing on algorithm functioning rather than content

Explanation

Rabbachin describes the DSA as the first global legal standard for tackling disinformation while preserving free speech rights. It examines how content is distributed through algorithms rather than the content itself, aiming to prevent malicious actors from abusing algorithms.


Evidence

The DSA provides the Commission with strong investigatory powers and increases transparency on social media platform functioning


Major discussion point

Regulatory approach focusing on algorithmic transparency


Topics

Legal and regulatory | Human rights


AI-generated disinformation is increasing and was particularly witnessed in recent European elections

Explanation

Rabbachin notes that EDMO monitoring shows AI-generated disinformation content is rising, with particular evidence during recent national elections in Europe. This represents a growing challenge that requires attention.


Evidence

EDMO monitoring data showing increased AI-generated disinformation during recent European national elections


Major discussion point

AI’s role in generating disinformation


Topics

Sociocultural | Legal and regulatory


Agreed with

– Paula Gori
– Morten Langfeldt Dahlback
– Mikko Salo
– Moderator

Agreed on

AI is creating new challenges for disinformation detection and response


The Code of Practice on Disinformation has grown from 16 signatories with 21 commitments to 42 signatories with 128 measures

Explanation

Rabbachin highlights the expansion of the voluntary code since 2018, showing increased industry engagement. The code has been integrated into the DSA framework, making it auditable and creating obligations for signatories.


Evidence

Specific numbers showing growth from 16 to 42 signatories and from 21 to 128 measures, with integration into DSA making it auditable


Major discussion point

Evolution of industry self-regulation


Topics

Legal and regulatory | Sociocultural


The EU supports an independent, multidisciplinary community of 120+ organizations whose fact-checking work is completely independent from the European Commission and governments

Explanation

Rabbachin emphasizes that the EU doesn’t directly determine what constitutes disinformation but supports an independent network of organizations. These organizations are selected by independent experts and maintain complete independence in their fact-checking and analysis work.


Evidence

EDMO network includes more than 120 organizations across the EU, Norway, and soon Ukraine and Moldova, selected by independent experts


Major discussion point

Independence of EU-supported fact-checking network


Topics

Human rights | Sociocultural


Agreed with

– Paula Gori
– Moderator

Agreed on

Multi-stakeholder approach is essential for addressing disinformation


Disagreed with

– Morten Langfeldt Dahlback

Disagreed on

Approach to combating disinformation: regulatory vs. independence concerns


The Digital Services Act requires platforms to provide data for research activities, with upcoming regulations to improve researcher access

Explanation

Rabbachin explains that the DSA obligates platforms to provide data for research purposes, and there is an upcoming delegated act that should further improve researcher access to platform data for their work.


Evidence

Reference to DSA obligations and upcoming delegated act to enhance researcher access


Major discussion point

Platform data access for research under DSA


Topics

Legal and regulatory | Development


Disagreed with

– Morten Langfeldt Dahlback
– Audience

Disagreed on

Platform data access and transparency


Media literacy appears across multiple policy frameworks and is supported through various EU initiatives and expert groups

Explanation

Rabbachin outlines how media literacy is integrated into various EU policies including the DSA and European Media Freedom Act. The EU supports media literacy through expert groups, pilot projects, and local initiatives tailored to member state needs.


Evidence

Media literacy provisions in DSA and European Media Freedom Act, media literacy expert group, European Board for Media Services subgroup, and Creative Europe pilot projects


Major discussion point

Comprehensive EU approach to media literacy


Topics

Sociocultural | Legal and regulatory


Agreed with

– Mikko Salo
– Morten Langfeldt Dahlback
– Moderator

Agreed on

Media literacy and education are fundamental to combating disinformation


A

Audience

Speech speed

119 words per minute

Speech length

375 words

Speech time

188 seconds

Academic access to platform data remains problematic, with platforms claiming definitional issues prevent compliance

Explanation

An audience member from Iceland studying platform impacts on democracy reports that large platforms are avoiding providing academic access by claiming the EU needs to make clearer definitions. This affects research into how platforms undermine democratic processes.


Evidence

Experience from Humboldt Institute’s Friends of the DSA group of academics trying to gain access


Major discussion point

Platform compliance with data access requirements


Topics

Legal and regulatory | Development


Disagreed with

– Morten Langfeldt Dahlback
– Alberto Rabbachin

Disagreed on

Platform data access and transparency


M

Moderator

Speech speed

133 words per minute

Speech length

816 words

Speech time

367 seconds

The session aims to identify who the friends and foes are in the complicated landscape of Internet Governance and the fight against disinformation

Explanation

The moderator frames the discussion as needing to identify allies and adversaries in the complex landscape of internet governance, particularly regarding disinformation challenges. This sets up the session as exploring the different stakeholders and their roles in addressing these issues.


Evidence

Session title and opening remarks about the complicated and unclear situation for Internet Governance


Major discussion point

Identifying stakeholders in internet governance and disinformation


Topics

Legal and regulatory | Sociocultural


Agreed with

– Paula Gori
– Alberto Rabbachin

Agreed on

Multi-stakeholder approach is essential for addressing disinformation


There is a shift from fact-checking to media literacy and user empowerment that needs reflection, particularly by policymakers

Explanation

The moderator highlights and questions this transition from reactive fact-checking approaches to proactive media literacy and user empowerment strategies. He specifically asks the European Commission representative whether they agree with this shift, indicating it’s a significant policy consideration.


Evidence

Direct question to Alberto Rabbachin about agreeing with the shift from fact-checking to media literacy


Major discussion point

Evolution from fact-checking to media literacy approaches


Topics

Sociocultural | Legal and regulatory


Agreed with

– Mikko Salo
– Morten Langfeldt Dahlback
– Alberto Rabbachin

Agreed on

Media literacy and education are fundamental to combating disinformation


Information integrity and reliable source identification become increasingly important due to AI-generated disinformation floods

Explanation

The moderator synthesizes the discussion by emphasizing that information integrity is becoming more crucial as artificial intelligence enables automatic generation of disinformation at scale. He argues that identifying reliable sources and detecting manipulation will become increasingly difficult, requiring a combination of regulatory approaches and media integrity work.


Evidence

Reference to the flood of automatically generated disinformation through AI and the increasing difficulty of identification


Major discussion point

Information integrity in the age of AI-generated content


Topics

Sociocultural | Legal and regulatory


Agreed with

– Paula Gori
– Morten Langfeldt Dahlback
– Alberto Rabbachin
– Mikko Salo

Agreed on

AI is creating new challenges for disinformation detection and response


A mix of rules and media integrity work by journalists is essential to face an unpredictable future

Explanation

The moderator concludes that addressing disinformation challenges requires combining regulatory frameworks (like those the EU is developing) with professional media integrity work conducted by journalists and media organizations. He presents this as necessary preparation for an uncertain technological and information landscape.


Evidence

Reference to European Union’s regulatory efforts and the work of media and journalists


Major discussion point

Combined regulatory and professional approach to disinformation


Topics

Legal and regulatory | Sociocultural


Agreements

Agreement points

Multi-stakeholder approach is essential for addressing disinformation

Speakers

– Paula Gori
– Alberto Rabbachin
– Moderator

Arguments

EDMO serves as a platform bringing together different stakeholders, similar to IGF’s approach to internet governance


The EU supports an independent, multidisciplinary community of 120+ organizations whose fact-checking work is completely independent from the European Commission and governments


The session aims to identify who the friends and foes are in the complicated landscape of Internet Governance and the fight against disinformation


Summary

All speakers agree that combating disinformation requires collaboration between multiple stakeholders including civil society, government, platforms, and international organizations, while maintaining independence of fact-checking organizations


Topics

Legal and regulatory | Sociocultural


Local context and specificities are crucial for effective disinformation response

Speakers

– Paula Gori
– Mikko Salo

Arguments

Local specificities in culture, politics, and language are crucial for understanding how disinformation impacts different countries


Investment in media education is crucial, particularly for children who need to learn critical thinking before using AI tools


Summary

Both speakers emphasize that disinformation responses must account for local cultural, political, and linguistic contexts, with tailored approaches for different countries and communities


Topics

Sociocultural | Development


Media literacy and education are fundamental to combating disinformation

Speakers

– Mikko Salo
– Morten Langfeldt Dahlback
– Alberto Rabbachin
– Moderator

Arguments

Investment in media education is crucial, particularly for children who need to learn critical thinking before using AI tools


There may be a necessary shift from fact-checking and debunking work toward more literacy work and empowering people to think critically


Media literacy appears across multiple policy frameworks and is supported through various EU initiatives and expert groups


There is a shift from fact-checking to media literacy and user empowerment that needs reflection, particularly by policymakers


Summary

All speakers agree that media literacy and critical thinking education are becoming increasingly important, potentially more so than reactive fact-checking approaches


Topics

Sociocultural | Development


AI is creating new challenges for disinformation detection and response

Speakers

– Paula Gori
– Morten Langfeldt Dahlback
– Alberto Rabbachin
– Mikko Salo
– Moderator

Arguments

The disinformation phenomenon is extremely complex with multiple variables, making it difficult to find simple solutions while protecting human rights


The shift toward AI chatbots creates invisible information consumption that fact-checkers cannot observe or respond to effectively


AI-generated disinformation is increasing and was particularly witnessed in recent European elections


People need to develop AI literacy and learn to think critically before using AI tools


Information integrity and reliable source identification become increasingly important due to AI-generated disinformation floods


Summary

All speakers acknowledge that AI is fundamentally changing the disinformation landscape, making detection more difficult and requiring new approaches to combat AI-generated false content


Topics

Sociocultural | Legal and regulatory


Similar viewpoints

Both speakers advocate for regulatory approaches that focus on algorithmic transparency and platform functioning rather than direct content moderation, emphasizing protection of fundamental rights and freedom of expression

Speakers

– Paula Gori
– Alberto Rabbachin

Arguments

Global frameworks emphasize fundamental rights, algorithmic transparency, multi-stakeholder approaches, and risk mitigation rather than content deletion


The Digital Services Act is pioneering regulation that addresses disinformation while protecting freedom of expression by focusing on algorithm functioning rather than content


Topics

Legal and regulatory | Human rights


Both express frustration with limited platform data access for research purposes, highlighting that platforms are not providing adequate transparency despite regulatory requirements

Speakers

– Morten Langfeldt Dahlback
– Audience

Arguments

Major platforms are limiting researcher access to data, with research APIs being more restricted than expected


Academic access to platform data remains problematic, with platforms claiming definitional issues prevent compliance


Topics

Legal and regulatory | Development


Both speakers express concerns about threats to democratic institutions and the challenges of maintaining independence while working to combat disinformation

Speakers

– Benjamin Shultz
– Morten Langfeldt Dahlback

Arguments

Bad actors are becoming more active in spreading information campaigns that undermine democracy and tear at social fabric


Independent actors like fact-checkers face challenges maintaining independence from governments while their objectives align with official bodies


Topics

Human rights | Sociocultural


Unexpected consensus

Shift from reactive fact-checking to proactive media literacy

Speakers

– Morten Langfeldt Dahlback
– Mikko Salo
– Alberto Rabbachin
– Moderator

Arguments

There may be a necessary shift from fact-checking and debunking work toward more literacy work and empowering people to think critically


Investment in media education is crucial, particularly for children who need to learn critical thinking before using AI tools


Media literacy appears across multiple policy frameworks and is supported through various EU initiatives and expert groups


There is a shift from fact-checking to media literacy and user empowerment that needs reflection, particularly by policymakers


Explanation

It’s unexpected that fact-checkers themselves (Dahlback) are advocating for a shift away from their traditional reactive approach toward proactive education, with broad agreement from policymakers and civil society representatives


Topics

Sociocultural | Development


Complexity requires nuanced rather than simple solutions

Speakers

– Paula Gori
– Morten Langfeldt Dahlback
– Alberto Rabbachin

Arguments

The disinformation phenomenon is extremely complex with multiple variables, making it difficult to find simple solutions while protecting human rights


There is insufficient knowledge about the scope and impact of disinformation, and conditions for gaining this knowledge are deteriorating


The Digital Services Act is pioneering regulation that addresses disinformation while protecting freedom of expression by focusing on algorithm functioning rather than content


Explanation

Unexpected consensus among different stakeholder types (NGO, fact-checker, policymaker) that simple solutions are inadequate and that the complexity of disinformation requires sophisticated, multi-faceted approaches


Topics

Legal and regulatory | Human rights


Overall assessment

Summary

Strong consensus exists on the need for multi-stakeholder cooperation, importance of media literacy education, challenges posed by AI-generated disinformation, and the necessity of protecting fundamental rights while addressing disinformation. There is also agreement on the importance of local context and the complexity of the phenomenon requiring nuanced solutions.


Consensus level

High level of consensus among speakers despite representing different sectors (EU policy, fact-checking, civil society, US perspective). The consensus suggests a mature understanding of disinformation challenges and broad agreement on fundamental principles, though implementation details may vary. This strong alignment across different stakeholder groups indicates potential for effective collaborative approaches to combating disinformation while preserving democratic values.


Differences

Different viewpoints

Approach to combating disinformation: regulatory vs. independence concerns

Speakers

– Morten Langfeldt Dahlback
– Alberto Rabbachin

Arguments

Independent actors like fact-checkers face challenges maintaining independence from governments while their objectives align with official bodies


The EU supports an independent, multidisciplinary community of 120+ organizations whose fact-checking work is completely independent from the European Commission and governments


Summary

Dahlback expresses concern about fact-checkers maintaining independence when their objectives align with government goals, while Rabbachin emphasizes that EU-supported organizations maintain complete independence from government influence


Topics

Human rights | Sociocultural


Platform data access and transparency

Speakers

– Morten Langfeldt Dahlback
– Alberto Rabbachin
– Audience

Arguments

Major platforms are limiting researcher access to data, with research APIs being more restricted than expected


The Digital Services Act requires platforms to provide data for research activities, with upcoming regulations to improve researcher access


Academic access to platform data remains problematic, with platforms claiming definitional issues prevent compliance


Summary

There’s disagreement about the effectiveness of current data access provisions – Rabbachin presents the DSA as providing adequate framework, while Dahlback and audience members report practical difficulties in accessing platform data for research


Topics

Legal and regulatory | Development


Unexpected differences

Effectiveness of current transparency and research access mechanisms

Speakers

– Alberto Rabbachin
– Morten Langfeldt Dahlback
– Audience

Arguments

The Digital Services Act requires platforms to provide data for research activities, with upcoming regulations to improve researcher access


Major platforms are limiting researcher access to data, with research APIs being more restricted than expected


Academic access to platform data remains problematic, with platforms claiming definitional issues prevent compliance


Explanation

This disagreement is unexpected because it reveals a gap between regulatory intentions and practical implementation. While the EU representative presents the DSA as providing adequate framework for research access, practitioners report significant difficulties in actually obtaining data, suggesting implementation challenges not acknowledged in policy discussions


Topics

Legal and regulatory | Development


Overall assessment

Summary

The main areas of disagreement center on the balance between regulatory approaches and independence concerns, the effectiveness of current data access mechanisms, and the optimal balance between different anti-disinformation strategies (fact-checking vs. media literacy vs. regulatory frameworks)


Disagreement level

Moderate disagreement with significant implications – while speakers share common goals of protecting information integrity and fundamental rights, they differ substantially on implementation approaches and the effectiveness of current measures. This suggests potential coordination challenges between policy makers, practitioners, and researchers in addressing disinformation effectively



Takeaways

Key takeaways

Disinformation is a complex, multi-faceted phenomenon that erodes information integrity essential for democratic decision-making, requiring sophisticated responses rather than simple solutions


There is a fundamental shift occurring from traditional fact-checking approaches toward media literacy and user empowerment, particularly as AI chatbots make disinformation less observable to fact-checkers


Regulatory divergence between the US and Europe is hampering knowledge gathering about disinformation, with the US experiencing democratic backsliding while Europe maintains stronger regulatory frameworks


The EU’s approach focuses on algorithmic transparency and platform accountability rather than content censorship, exemplified by the Digital Services Act which addresses how content is distributed rather than the content itself


Education and media literacy, particularly for children, are becoming increasingly critical as AI-generated disinformation proliferates and people need to develop critical thinking skills before using AI tools


Independent fact-checking organizations face the challenge of maintaining credibility and independence while their objectives increasingly align with government anti-disinformation efforts


Multi-stakeholder cooperation through networks like EDMO (120+ organizations across EU) is essential, but must respect local specificities in culture, politics, and language


Platform data access for researchers remains severely limited despite regulatory requirements, hindering understanding of disinformation scope and impact


Resolutions and action items

The Code of Practice on Disinformation will be fully integrated into the DSA framework as of July 1st, making it auditable and creating binding obligations for platform signatories


A new EDMO hub covering Ukraine and Moldova will be established to address critical regional disinformation challenges


Upcoming EU delegated acts will improve researcher access to platform data for disinformation studies


Continued investment in media literacy programs across EU member states, with initiatives tailored to local needs


Maintenance of transatlantic cooperation through focus on areas of broad bipartisan support, such as banning non-consensual deepfakes


Unresolved issues

How to effectively monitor and respond to disinformation distributed through private AI chatbot interactions that are not publicly observable


How independent fact-checking organizations can maintain credibility while working closely with government anti-disinformation initiatives


How to obtain sufficient knowledge about the scope and impact of disinformation when platform transparency is decreasing


How to balance the need for platform regulation with protecting freedom of expression, particularly given varying cultural and political contexts


How to address the growing regulatory divergence between the US and Europe while maintaining effective global cooperation against disinformation


How to scale media literacy education effectively when current global investments in media education are minimal


How to ensure meaningful academic and researcher access to platform data despite platform resistance and technical limitations


Suggested compromises

Focus on areas of broad political consensus (like banning non-consensual deepfakes) to maintain transatlantic cooperation despite broader disagreements


Emphasize algorithmic transparency and platform accountability rather than content moderation to address free speech concerns while tackling disinformation


Combine regulatory approaches with voluntary industry cooperation through codes of practice that can evolve into binding obligations


Balance global principles with regional specificities, allowing for local adaptation while maintaining shared fundamental values


Shift emphasis from reactive fact-checking to proactive media literacy education to address the changing nature of information consumption


Maintain independence of fact-checking organizations through multi-stakeholder governance structures rather than direct government control


Thought provoking comments

We have to start to think about new ways, new creative ways to maintain the alliance, the Transatlantic Alliance in these rough times… Recently in the US, non-consensual explicit deepfakes, colloquially known as deepfake porn, have actually been made illegal… my hope is that with small steps like these that have been taken in the states that do have broad support, such as banning explicit deepfakes that are made non-consensually, my hope is that collaborating on these issues that Europe and the U.S. and countries all around the world can continue the dialogue

Speaker

Benjamin Shultz


Reason

This comment was insightful because it reframed the discussion from focusing on problems to identifying practical solutions for maintaining international cooperation despite political tensions. Shultz acknowledged the deteriorating transatlantic relationship while proposing a pragmatic approach of finding common ground on specific, less politically charged issues.


Impact

This shifted the conversation from a purely analytical discussion of disinformation challenges to a more solution-oriented dialogue about maintaining cooperation. It introduced the concept of incremental progress through bipartisan issues, which influenced subsequent speakers to consider practical approaches rather than just theoretical frameworks.


However, when you use chatbots like ChatGPT or Claude… the information that you receive from the chatbot is not in the public sphere at all. It’s a response generated on the basis of a prompt that you give to the language model, which means that we, as fact-checkers, for example, are unable, we can’t see what responses you’re getting… I think we might see a transition from more debunking and fact-checking work like what we’ve been engaged in so far to more literacy work

Speaker

Morten Langfeldt Dahlback


Reason

This was perhaps the most thought-provoking comment of the session because it fundamentally challenged the existing paradigm of fighting disinformation. Dahlback identified a critical blind spot in current approaches – that AI-generated responses in private conversations are invisible to fact-checkers, making traditional debunking methods obsolete.


Impact

This comment created a pivotal moment in the discussion, shifting focus from current regulatory frameworks to future challenges. It prompted the moderator to specifically ask the European Commission representative about this shift from fact-checking to media literacy, making it a central theme for the remainder of the session. It essentially redefined the problem space from observable public content to private, personalized AI interactions.


I think one of the core challenges that we face in responding to this problem is that we don’t know enough about the scope of the problem, and we don’t know enough about its impact… the conditions for gaining more knowledge about this problem have become worse over the past few months… because of regulatory divergence between Europe and the US

Speaker

Morten Langfeldt Dahlback


Reason

This comment was insightful because it identified a fundamental epistemological problem – that effective policy responses require understanding the scope and impact of disinformation, but the tools for gaining this knowledge are being eroded. It connected regulatory divergence to practical research limitations.


Impact

This comment established a critical foundation for understanding why the disinformation fight is becoming more difficult. It influenced subsequent discussion about data access for researchers and highlighted the interconnected nature of regulatory frameworks and research capabilities.


Once our objectives are aligned with the objectives of governments and of other regulatory and official bodies, I think it’s easy for others to throw our independence into doubt, because the alignment is too close

Speaker

Morten Langfeldt Dahlback


Reason

This comment revealed a sophisticated understanding of the paradox facing independent fact-checkers: the more successful they are in aligning with government anti-disinformation efforts, the more their independence and credibility can be questioned. It highlighted the delicate balance between cooperation and independence.


Impact

This comment introduced a nuanced discussion about the relationship between civil society organizations and government bodies in the fight against disinformation. It added complexity to what might otherwise be seen as straightforward cooperation, showing how political dynamics can undermine the very organizations trying to combat disinformation.


I think that’s where we have to find some sort of protection and ensure that before they first need to… they need to be able to think before they use AI. And I was just framing and I was actually asking the chat, how does it look like an AI native person? Because if we are not able to think ourselves, we are not able to use the AI as it’s meant at the moment

Speaker

Mikko Salo


Reason

This comment was thought-provoking because it identified a fundamental cognitive challenge of the AI era – that people need critical thinking skills before they can effectively use AI tools. The concept of ‘AI native persons’ and the need to ‘think before using AI’ highlighted a crucial educational gap.


Impact

This comment reinforced the emerging theme about the importance of education and media literacy over traditional fact-checking approaches. It provided concrete support for the shift in strategy that other speakers were advocating, emphasizing the foundational role of critical thinking skills.


Overall assessment

These key comments fundamentally reshaped the discussion from a traditional focus on current disinformation challenges and regulatory responses to a forward-looking examination of how the landscape is changing. Morten Langfeldt Dahlback’s insights about AI-generated content being invisible to fact-checkers and the erosion of research capabilities created pivotal moments that shifted the conversation toward future challenges and the need for new approaches. Benjamin Shultz’s reframing toward practical cooperation despite political tensions moved the discussion from problem identification to solution-seeking. Together, these comments transformed what could have been a routine policy discussion into a more sophisticated analysis of the evolving nature of information integrity challenges, the limitations of current approaches, and the need for adaptive strategies that emphasize education and literacy over traditional content moderation.


Follow-up questions

How can we better understand the scope and impact of disinformation across different domains?

Speaker

Morten Langfeldt Dahlback


Explanation

He identified this as a core challenge, noting that we don’t know enough about the scope of the problem and its impact, and that conditions for gaining knowledge have worsened due to regulatory divergence and platform restrictions


How can independent fact-checking organizations maintain their independence while working with governments and regulatory bodies on disinformation?

Speaker

Morten Langfeldt Dahlback


Explanation

He highlighted the difficult position independent actors face when their objectives align with governments, as it can throw their independence into doubt and affect audience trust


How can fact-checkers and researchers address misinformation generated by private chatbot interactions that are not publicly observable?

Speaker

Morten Langfeldt Dahlback


Explanation

He noted that chatbot responses are not in the public sphere, making it impossible for fact-checkers to observe and respond to misinformation delivered through these channels


What does an AI-native person look like and how should we prepare them for information integrity?

Speaker

Mikko Salo


Explanation

He emphasized the urgent need for AI literacy and questioned how people who grow up with AI will think critically about information, stressing that people need to be able to think before they use AI


What is the current status of academic access to platform data under the Digital Services Act?

Speaker

Thora (audience member)


Explanation

She highlighted that large platforms are dragging their feet on providing academic access, claiming the EU needs to make definitions first, which is hindering research on how platforms undermine democracy


How can EU rules help recognize AI propaganda and digital integrity violations?

Speaker

Mohamed Aded Ali (audience member)


Explanation

He asked about identifying when AI technologies are misused to deceive or manipulate, and how EU frameworks can address these threats to digital communication integrity


How much investment should be allocated to cognitive security and information integrity as part of societal security?

Speaker

Mikko Salo


Explanation

He referenced the 5% investment in security and suggested 1.5% should go to whole-of-society security including information integrity, but questioned what the appropriate investment level should be


How can we transition from debunking and fact-checking work to more effective literacy work?

Speaker

Morten Langfeldt Dahlback


Explanation

He suggested this transition may be necessary as more information consumption moves to private chatbot interactions, requiring individuals to assess information themselves rather than relying on public fact-checking


Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

Open Forum #58 Collaborating for Trustworthy AI an Oecd Toolkit and Spotlight on AI in Government

Session at a glance

Summary

This OECD open forum discussion focused on implementing AI principles and using AI in government services, featuring two main segments with international experts and policymakers. The first segment examined the OECD AI Principles Implementation Toolkit, a practical initiative designed to help countries, particularly in the Global South, develop responsible AI policies tailored to their local contexts. Costa Rica’s Marlon Avalos explained how his country initiated this toolkit project after recognizing that while OECD principles provide strong ethical guidance, many developing countries lack the tools to translate these principles into actionable policies. The toolkit will feature a self-assessment component and repository of best practices to guide countries through AI governance challenges.


OECD’s Lucia Rossi detailed the toolkit’s structure, emphasizing its co-creation approach through regional workshops with countries in Asia, Africa, and Latin America. Mozilla’s Jibu Elias shared India’s community-driven approach to responsible AI, highlighting successful grassroots initiatives like student-developed accessibility tools and tribal community workshops that demonstrate how AI adoption must be locally rooted and people-centered. Niger’s Anne Rachel Ng discussed African countries’ opportunities and challenges, noting that while AI can address development barriers in healthcare, agriculture, and education, the continent faces significant infrastructure constraints, with only 22% of Africans having broadband access and many AI systems performing poorly on African populations due to training bias.


The second segment explored practical government AI implementation, with Norway’s Katarina de Brisis sharing successful use cases including AI-powered X-ray analysis that reduced patient waiting times by 79 days and tax fraud detection that increased detection rates from 12% to 85%. Korea’s Jungwook Kim emphasized three key pillars for effective AI adoption: innovation in data and infrastructure, inclusion to address digital divides, and strategic investment in capabilities. Both speakers stressed the importance of building employee competence, establishing legal frameworks, and ensuring data security when implementing AI in government services. The discussion concluded that successful AI implementation requires inclusive, context-sensitive approaches that prioritize trustworthiness, local capacity building, and international cooperation to prevent widening digital divides.


Keypoints

## Major Discussion Points:


– **OECD AI Principles Implementation Toolkit Development**: A collaborative initiative led by Costa Rica to create practical tools that help countries, especially in the Global South, translate the high-level OECD AI principles into actionable policies. The toolkit will feature self-assessment tools and region-specific guidance based on best practices from comparable countries.


– **Inclusive AI Development in Emerging Economies**: Speakers from India, Costa Rica, and Niger emphasized the importance of community-rooted, locally-contextualized AI solutions. Examples included student-developed accessibility tools, tribal community workshops, and addressing infrastructure challenges like connectivity and the digital divide.


– **AI Implementation in Government Services**: Discussion of practical AI applications in public sector services, with Norway sharing successful cases like AI-assisted medical diagnosis, tax fraud detection, and police transcription services. The focus was on improving efficiency while maintaining trustworthiness and citizen safety.


– **Challenges and Risks in AI Governance**: Identification of key barriers including inadequate infrastructure, skills gaps, data scarcity, and the need for inclusive governance frameworks. Speakers highlighted risks around bias, exclusion, and the importance of building public trust through transparent, accountable AI systems.


– **International Cooperation and Capacity Building**: Emphasis on the need for collaborative approaches to AI development, with particular attention to supporting developing countries through knowledge sharing, technical assistance, and ensuring no country is left behind in the AI transformation.


## Overall Purpose:


The discussion aimed to showcase practical approaches for implementing responsible AI governance globally, with a particular focus on supporting developing countries. The session sought to bridge the gap between high-level AI principles and concrete policy actions, while demonstrating real-world applications of AI in government services.


## Overall Tone:


The discussion maintained a collaborative and constructive tone throughout, characterized by knowledge sharing and mutual learning. Speakers were optimistic about AI’s potential while remaining realistic about challenges. The tone was particularly inclusive, with strong emphasis on ensuring global participation in AI development. Technical difficulties with some remote speakers added a touch of informality but reinforced the speakers’ points about digital infrastructure challenges. The session concluded on an encouraging note, emphasizing collective action and continued cooperation.


Speakers

– **Moderator (Yoichi Iida)**: Chair of the OECD Committee on Digital Policy


– **Marlon Avalos**: Director of Research, Development and Innovation at the Ministry of Science and Technology from Costa Rica (participated online)


– **Lucia Rossi**: Economist at Artificial Intelligence and Digital Emerging Technology Division from OECD


– **Jibu Elias**: Responsible Computing Lead for India from Mozilla


– **Anne Rachel Ng**: Director General at National Agency for Information Society, ANSI from Niger


– **Katarina de Brisis**: Deputy Director General at the Ministry of Digitalization and Public Governance from Norway, and long-standing representative at OECD Digital Policy Committee


– **Jungwook Kim**: Executive Director at Center for International Development from KDI


– **Seong Ju Park**: Policy Analyst at Innovative Digital and Open Government Division from OECD


Additional speakers:


None identified beyond the provided speakers’ list.


Full session report

# OECD Open Forum: Implementing AI Principles and Government AI Services – Discussion Report


## Executive Summary


This OECD open forum at the Internet Governance Forum 2025 brought together international experts to discuss two critical aspects of AI governance: implementing AI principles through practical toolkits and deploying AI in government services. The session featured representatives from Costa Rica, Niger, India, Norway, and Korea, alongside OECD officials, creating dialogue between developed and developing nations on shared AI governance challenges.


The discussion was structured in two segments: first examining the OECD AI Principles Implementation Toolkit led by Costa Rica, and second exploring practical government AI applications. Key themes included the need for international cooperation, community-centered approaches to AI development, and addressing infrastructure challenges while scaling AI implementations effectively.


## Session Overview and Structure


The forum was moderated by Yoichi Iida, Chair of the OECD Committee on Digital Policy, who noted Japan’s role in initiating the international discussion at the OECD in 2016 that led to the AI principles. The session then transitioned to Seong Ju Park, Policy Analyst at OECD’s Innovative Digital and Open Government Division, who moderated the second segment on government AI services.


## Segment 1: OECD AI Principles Implementation Toolkit


### Initiative Background


Marlon Avalos, Online Director of Research Development and Innovation at Costa Rica’s Ministry of Science and Technology, explained the toolkit’s origins in Costa Rica’s experience developing their national AI strategy. Despite being politically stable and technically skilled, Costa Rica recognized significant challenges in translating OECD AI principles into actionable policies. As Avalos noted, “even a country like Costa Rica, politically stable, technically skilled and internationally connected, face these challenges, then surely other countries like us will too face that challenge.”


The initiative gained momentum when the Global Partnership on AI (GPAI) joined with the OECD AI community in July 2024, creating opportunities for broader collaboration on practical implementation tools.


### Toolkit Structure and Co-Creation Approach


Lucia Rossi, Economist at OECD’s Artificial Intelligence and Digital Emerging Technology Division, outlined the toolkit’s development through regional co-creation workshops across Asia, Africa, and Latin America. The toolkit will include:


– A self-assessment tool for countries to evaluate their AI governance capabilities


– Region-specific guidance tailored to different developmental contexts


– A repository of best practices from comparable countries


– Resources available through the OECD AI Policy Observatory on oecd.ai


The co-creation workshops serve dual purposes: informing toolkit development and creating knowledge-sharing networks among participating countries.


### Country Experiences and Perspectives


**India – Community-Driven Development**


Jibu Elias, Responsible Computing Lead for India at Mozilla, presented examples of grassroots AI initiatives including student-developed accessibility tools like WebBeast (a web accessibility checker) and PhysioPlay (a physiotherapy game), plus tribal community workshops. He emphasized that “responsible AI must be inclusive, accessible, and rooted in local values, focusing on communities most affected but least represented in AI development.”


Elias posed a fundamental question: “Don’t just ask who builds AI, ask whose future is it building? Because in countries like ours, trust is not a given, it’s earned. And when communities are trusted as co-creators, not just end users, they don’t just adopt technology, they transform it.”


**Niger – African Context and Challenges**


Anne Rachel Ng, Director General at Niger’s National Agency for Information Society (ANSI), highlighted both opportunities and significant barriers for AI adoption in Africa. She identified potential applications in healthcare, agriculture, and education, while noting critical infrastructure constraints: only 22% of Africans have broadband access, and 16 African countries are landlocked.


Ng addressed data bias issues, noting that only 2% of African-generated data is used locally, and facial recognition systems perform poorly on African populations. She referenced how pulse oximeters during COVID-19 were less accurate for people with darker skin tones due to training bias.


Despite challenges, Ng advocated for patient, culturally-grounded approaches, invoking an African saying: “Europeans have watches, we have time,” explaining that “taking the time to develop context-appropriate solutions is more important than rushing implementation without proper understanding.”


## Segment 2: AI in Government Services


### OECD Research Findings


Seong Ju Park presented OECD research showing that while AI offers significant potential for improving public services, implementation faces numerous barriers. AI use cases are unevenly distributed across government functions, with many initiatives remaining at the piloting stage rather than scaling to wider systems.


Government AI carries higher risks than private sector applications, including ethical, operational, exclusion, and public resistance risks. Some government functions face particular barriers, such as stricter data access rules and requirements for audit trails in public integrity functions.


### Country Implementation Examples


**Norway – Systematic Deployment**


Katarina de Brisis, Deputy Director General at Norway’s Ministry of Digitalisation and Public Governance, shared concrete examples of successful AI implementation:


– AI-powered X-ray analysis allowing patients to go home immediately instead of waiting, affecting about 2,000 patients


– Tax administration fraud detection improving from 12% to 85% detection rates, generating 110 million kroner in additional revenue


– Police transcription services streamlining administrative processes


Currently, 70% of Norwegian state agencies use AI, with targets of 80% by 2025 and 100% by 2030. Norway is investing in Norwegian language foundational models and computing infrastructure while implementing the EU AI Act.


**Korea – Strategic Framework**


Jungwook Kim, Executive Director at Korea’s KDI Center for International Development, outlined a three-pillar framework: innovation (data and infrastructure development), inclusion (addressing digital divides through accessibility improvements), and investment (strategic resource allocation).


Kim noted that AI involves “moving targets” requiring “agile measures to take care of the AI safety issues,” highlighting the need for adaptive governance frameworks.


## Key Themes and Consensus Points


### International Cooperation


All speakers emphasized the critical importance of international cooperation for successful AI development. The OECD toolkit represents collaborative efforts to bridge gaps between principles and practice, with support across different developmental contexts.


### Community-Centered Approaches


Multiple speakers stressed involving local communities, especially marginalized groups, in AI development to ensure solutions address real local needs rather than imposing external solutions.


### Infrastructure as Foundation


Representatives from developing countries highlighted connectivity and infrastructure limitations as fundamental barriers requiring attention before sophisticated AI governance frameworks can be effectively implemented.


## Challenges and Implementation Barriers


### Scaling from Pilots to Systems


A significant challenge identified across countries is moving AI initiatives from pilot projects to systematic implementation across government services.


### Capacity Building


The pace of AI development often exceeds the speed at which human capacity can be developed, creating mismatches between technological advancement and workforce readiness.


### Bias and Inclusivity


Current AI systems often fail to serve non-Western populations effectively due to bias and lack of representative training data, requiring both technical solutions and inclusive development processes.


## Next Steps and Commitments


The OECD committed to:


– Launching a comprehensive report on governing with AI


– Creating a dedicated hub for AI in the public sector on oecd.ai


– Organizing regional co-creation workshops, starting with ASEAN countries in Thailand


– Conducting global data collection on AI policies and use cases for the OECD AI Policy Observatory


Regional workshops will continue with African, Central American, and South American countries to inform toolkit development and build knowledge-sharing networks.


## Conclusion


This forum demonstrated both the potential and challenges of implementing responsible AI governance globally. While speakers showed strong consensus on fundamental principles—international cooperation, inclusive approaches, and context-sensitive solutions—they also acknowledged significant differences in implementation approaches based on developmental contexts and available resources.


The discussion revealed that successful AI implementation requires more than technical capabilities; it demands inclusive governance frameworks, robust infrastructure, community engagement, and sustained capacity building. The OECD AI Principles Implementation Toolkit represents an important step toward bridging the gap between high-level principles and practical implementation, supported by ongoing collaboration and knowledge sharing among countries facing similar challenges.


The path forward emphasizes balancing international cooperation with local ownership, ensuring that AI development serves community needs while building the foundational capabilities necessary for sustainable and equitable AI adoption.


Session transcript

Moderator: Good afternoon everyone, and welcome to this open forum organized by the OECD. Thank you for joining us here in Lillestrøm and also online. This session brings together two connected discussions. Before jumping to the content, my name is Yoichi Iida, the chair of the OECD Committee on Digital Policy, and I’m very happy to be here together with all of you to moderate this session. So as a first part, we begin with a panel on the OECD AI Principles Implementation Toolkit, a practical initiative designed to support countries in strengthening their AI ecosystems and in adapting governance frameworks to local contexts. The toolkit will offer region-specific guidance to help bridge AI divides and advance responsible, inclusive AI development. We will then transition to a second segment focused on how governments are using AI in practice to improve public service delivery and policy making. Since 2019, the OECD AI principles have guided national strategies and international cooperation on AI. The OECD AI principles also serve as the common foundation guiding the work of the Global Partnership on AI, GPAI, which recently joined with the OECD AI community in July 2024 in a new integrated partnership. Despite the transformative potential of AI, access to the benefits of this technology remains uneven. Many countries face challenges related to infrastructure, human capacity, and policy frameworks, along with greater exposure to risks such as task replacement. Today’s discussion will spotlight policy efforts and initiatives that help close those gaps and promote inclusive AI ecosystems around the world. Please join me in welcoming our four distinguished speakers. First, joining online, Mr Marlon Avalos, Director of Research, Development and Innovation at the Ministry of Science and Technology from Costa Rica. 
Second, on my left side, Ms Lucia Rossi, Economist at the Artificial Intelligence and Digital Emerging Technology Division from the OECD. Third, again online, Mr Jibu Elias, Responsible Computing Lead for India from Mozilla. And last but not least, of course, Ms Anne Rachel Ng, Director General at the National Agency for Information Society, ANSI, from Niger. Welcome. So we will first hear from the panelists about their experience in designing policies for fostering AI development and diffusion. After the first round of questions, we will go around for a short final reflection from each speaker. We will then hear from our second segment, which will talk about AI in the public sector. Here we will listen to three distinguished speakers. So on my right side, Ms Katarina de Brisis, Deputy Director General at the Ministry of Digitalization and Public Governance from Norway, and also a long-standing representative at the OECD Digital Policy Committee. And Dr. Jungwook Kim, Executive Director at the Center for International Development from KDI. And Ms Seong Ju Park, Policy Analyst at the Innovative Digital and Open Government Division from the OECD. So after the second segment on AI in the public sector, we will then open the floor for a question and answer session to hear from you and engage in a conversation. So we will monitor the online chat and take questions from the room also. So as we will be taking questions after the second segment, if you are joining online, feel free to put your comments and questions in the chat box. If you are here with us in the room, please note your questions down, and we will reply to them after the second segment of this open forum. So we start with the first segment, and I would like to start with the discussion on collaboration on trustworthy AI and hear about designing AI policies and plans for the OECD toolkit to provide support to countries while elaborating these policies. So I will start with Mr. 
Avalos online. Mr. Marlon Avalos from Costa Rica initiated the work on the OECD principles implementation toolkit. So, Mr. Avalos, what prompted this initiative, and what has been Costa Rica’s experience so far in developing a national AI strategy from this perspective?


Marlon Avalos: Thank you very much, Iida-san, for giving me the floor. Good morning and good afternoon, dear colleagues connected virtually and there in Norway. It’s an honor to be at this Internet Governance Forum 2025 to tell a little bit about our experience, design our


Moderator: It seems we have some technical issues online, so please wait a little bit before we get him back; otherwise we will proceed to the second speaker. Okay, so thank you for your patience. Before we get him back online, I would like to proceed to the second speaker. So, moving to Lucia, I would like to ask you: could you tell us more about the OECD AI principles implementation toolkit, its objectives, structure, and how it aims to support governments with different levels of AI maturity in policymaking? What is the overall vision for this project going forward? Lucia, the floor is yours.


Lucia Rossi: Thank you, Yoichi, and good afternoon to the audience here and online. It’s a pleasure being here at the IGF. So, as Marlon was starting to say, this project was initiated by Costa Rica, and it started off from the consideration that AI opportunities are manifold across sectors and across the globe. And there are, of course, several potential transformative effects of AI across sectors, and we will hear later on about AI in the public sector, as well as, as we know, in agriculture, in health care, in education. And these opportunities are however difficult to seize for different countries, as there are several bottlenecks that oftentimes prevent countries from having the capacity or the financial resources or the organizational resources to devise effective AI policies. So with these considerations in mind, we started with our delegates in the Global Partnership on AI and with the support of several countries, including Japan, Costa Rica, the UK, France, and Korea, to develop what is a practical toolkit to implement the OECD principles. And just allow me to stay a bit on the principles that, as we heard, are the foundational document for the OECD in AI governance and that were adopted in 2019. These principles have since then been the object of further work from the OECD to provide analysis but also guidance on how to implement them. And they are constituted by five policy principles that are recommendations to governments around areas such as research and development, infrastructure, the policy environment, the skills and jobs that are required to effectively implement AI across sectors, and international cooperation. But there are also values-based principles that cover those values that all stakeholders should strive to embed in AI systems and, of course, to respect: democratic values, fairness, transparency, explainability, accountability, among others. 
So what this toolkit aims to do is to provide really practical resources for implementing, facilitating adoption across countries with a specific focus on emerging and developing economies, but tailored to the diversity of needs, preferences and available policy options across countries. So ultimately these resources will support advancing a more inclusive and effective AI governance. So in practice, what this toolkit will look like is an online tool that will be composed of two main elements, the first one being a self-assessment that countries will be able to navigate autonomously and that would guide them through, on one hand, the areas that they would need to strengthen in AI governance and, on the other hand, priorities that they may want to establish. And then once this self-assessment is completed, the toolkit will provide suggestions based on best practices in regions that are at a similar stage, or that are comparable or have similar challenges, so that they can take inspiration from these other countries. So the second component will build on the repository of national AI policies that we have on the OECD AI Policy Observatory and that we aim to strengthen by collecting further information on national initiatives and regional initiatives. And in terms of the design of this toolkit, one key feature is really the co-creation component. So to develop the toolkit, we are currently planning and organizing, and we already have one such regional workshop planned, to have real engagement with countries, with the designers of AI policies, to understand better, on one hand, what are the key challenges they face when devising AI policies and when thinking about AI governance in their respective countries. And on the other hand, understand what resources they need, but also, as I mentioned, understand what practices they have put in place to overcome these challenges. 
So we will have one first such workshop in Thailand, again supported by Japan with ASEAN countries, and we will then organize several others, for instance, with African countries, with Central American and South American countries. And we plan to make this tool as helpful as possible. I think I will stop here in the interest of time, and I’m just checking online if Marlon is there, but I don’t see him.


Marlon Avalos: So, please. Thank you, Iida-san. This is an immersive experience. I just lost my connection, and this is a challenge that developing countries like us face every day, every time. And, well, I was saying that our decision to promote this OECD AI principles implementation toolkit wasn’t a coincidence. It was intentional, based on our national experience, as you can see. And we saw a reality: while the OECD principles provide strong ethical guidance, many countries, especially in the Global South, still lack the tools and institutions to turn those principles into actions. Our initiative was motivated by three aspects: necessity, urgency, and opportunity. Why necessity? Because the AI revolution is reaching all countries, but the capacities needed to adopt it responsibly are still unequally shared. Urgency, because we saw how quickly the benefits of AI were concentrated in advanced economies, leaving others behind, mainly in infrastructure and AI compute capacity. And opportunity, because we have a chance to move from principles to concrete capabilities, mainly in developing countries. As context, we launched our national AI strategy last October. Currently, it’s being implemented with the support of over 50 entities across government, academia, civil society, and the private sector. And we learned a lot of things through this process. First, that a successful strategy must be grounded in reality. That’s why we try to focus on what truly matters: ensuring the ethical, secure, and responsible use, development, and adoption of artificial intelligence, always with the people at the center and aligned with our national priorities and values. We prioritized key sectors where AI can add tangible value, like health, education, agriculture, and public services, reflecting our development goals and our comparative advantages, like environmental leadership, political stability, and international engagement. 
We also decided to build a solid foundation first, based on our strategic objectives: first, design flexible and adaptive regulatory frameworks; second, strengthen our R&D and innovation ecosystem; third, develop talent and skills for a changing world; and fourth, leverage AI in the public sector as a tool for inclusion and efficiency. Our guiding principles emerged through diverse benchmarking, from the OECD and UNESCO recommendations to the Hiroshima AI Process Code of Conduct, and our national values rooted in peace and human dignity. As I said, we took the best parts of a lot of instruments. For example, we were inspired by the European Union AI Act, the U.S. AI Risk Management Framework, AI policies from our regional peers in Latin America, and several papers and reports. We didn’t stop there. We conducted a national risk assessment based on real threats and prior experience. As you can see, we got inspired by a lot of instruments and references, but one of our most important conclusions was that international collaboration is essential, mainly for developing countries like us. That’s why we embedded this international leadership as a core line of action in our strategy. Our active participation in the OECD, our membership in GPAI, regional initiatives, European programs, and other programs gave us the path to do it. Designing a strategy like this wasn’t easy, because we had a lot of goals and a lot of priorities, but we lacked maybe the knowledge that other countries, the developed countries, have. If even a country like Costa Rica, politically stable, technically skilled, and internationally connected, faces these challenges, then surely other countries like us will face that challenge too. Just a few days ago, as chair of the OECD Ministerial Council meeting, Costa Rica proposed the development of this OECD AI principles implementation toolkit, a tool now endorsed by several countries, members and non-members. 
Getting to this point required months of preparation and negotiation with developed and developing countries, thanks to the support and talent of the OECD Secretariat, represented today by Lucia Rossi on the panel, to design a tool that will contain simple and actionable features to help governments in the struggle of building their own AI policies: a self-assessment and implementation guide that my colleague Lucia Rossi explained in her intervention while I was reconnecting. This is not only a Costa Rica initiative; this is a collective project that is entering a phase of regional co-creation with the support of countries like Japan, Korea, Italy, France, the European Union, the Slovak Republic, and other countries that are supporting us not only politically but financially. Countries of different regions, the Central American region, the Latin American region, Africa, and Asia, will help shape the toolkit’s next iterations, ensuring it adapts as technologies evolve and societies change. Lastly, the success of the toolkit will depend, we hope, on customization, learning, and evidence. We need features that reflect local needs, processes that evolve over time, and metrics that show that AI is actually delivering value for people. Costa Rica offers its lessons based on our experience in the design of AI policies and the next tools and instruments that we are designing, for example, the sandbox, the regulations, and other instruments. And for sure, our full commitment to help turn the energy that we have and the support that countries gave us into actions, so that no country, regardless of size or income, is left behind in this age of artificial intelligence. I will stop here, and thank you, and my apologies for the connection issue. Thank you.


Moderator: Okay, thank you very much, Marlon, for sharing your experience and your efforts on this very important initiative. If you allow me to talk a little bit about Japan’s experience: we actually started this discussion in the year 2016 and proposed international discussion to the OECD on AI principles. That was the beginning of the whole process. When people agreed on the OECD AI principles, they were actually very comprehensive and very high-level. So some people said, you know, this is wonderful, but how can we make this into practical policies and actions? So now we are making efforts together, not only Japan, but all together with Costa Rica, Korea and others, of course, backed by the OECD Secretariat, to guide governments and other stakeholders to understand and turn this very comprehensive set of principles into actions and practical policies. So this is a wonderful process, and I’m very happy to hear these two presentations. And now I would like to move on to Jibu Elias from Mozilla online. So based on your experience and work with Mozilla and also your experience in India’s AI ecosystem, Jibu, what types of community-led or policy-driven initiatives have proven most effective in supporting responsible AI adoption, particularly in emerging economies? And what insights can we derive from these initiatives that could be relevant for policymakers? So Jibu, the floor is yours.


Jibu Elias: Thank you very much, Yoichi-san. It’s an honor to be here to share my experience building a responsible AI ecosystem in India, one of the most complex and dynamic tech environments in the world. So let’s begin with a foundational truth. In emerging economies, AI adoption is not just a question of capacity, but a larger question of context as well. Responsible AI must be inclusive, accessible, and rooted in the values and lived realities of the people it should serve. And at the Mozilla Foundation, we tried to meet these challenges head-on through a unique initiative called the Responsible Computing Challenge, or RCC. India has one of the largest, I think the second largest, developer populations in the world. Yet there are a lot of shortcomings. For example, ethics, accessibility, and inclusion are almost entirely missing from the mainstream AI or even the tech curricula. The AI workforce in India is concentrated in elite urban clusters around cities like Bangalore or Gurgaon, leaving the smaller tier-two and tier-three cities, rural communities, and especially women in the workforce behind. And fundamentally, there’s a growing trust deficit. People are rightfully skeptical of opaque systems that affect their jobs, access to welfare, or even their freedom. So in RCC India, we decided not to start with rather abstract frameworks. We started with people, especially students, academic faculty, women, marginalized communities like tribal populations, and most importantly, first-generation learners who had never been asked what responsible AI meant in their world. So from the starting point, we designed a deeply localized and community-rooted approach where we begin with this question: what does responsible AI mean to those who are most affected by it, but at the same time, least represented in building it? 
So our answer came from the communities we mentioned before: students, marginalized communities, and importantly, young innovators across the country. One of the most striking experiences came from one of the colleges we worked with, called Merian College, on a hilly campus in the Western Ghats in Kerala, which became a testbed for some of our ethical tech innovations. One of its standout outputs is an AI-powered tool called WebBeast, developed by a first-year BCS student. The tool is a lightweight, open-source, AI-powered accessibility widget, built as part of an equitable digital access course we developed with the university. It is now used by 30 websites across the world, and it even received a design patent from the Indian Patent Office. So this isn't just a student project; it's proof that even first-year undergraduates, when empowered with ethical frameworks and open tools, can create global public goods. Similarly, we had another tool called PhysioPlay, a WhatsApp-based AI simulation tool for physiotherapy students, designed to help them build diagnostic skills through gamified real-world casework, built by a physiotherapy student. SpeakBoost is a communication coaching platform that provides AI-powered feedback on fluency, filler words, grammar, and tone, helping students prepare for interviews and presentations. TwinSage was developed by a community of students from Maharashtra, coming from very marginalized groups who don't have the privilege of access to high-tech technology. They developed a personal finance chatbot that teaches college students about budgeting, saving, and financial planning through natural language conversations. 
Each of the tools mentioned here is community-rooted, in some cases built by students for their peers who understood what was lacking in their ecosystem and what they needed to build. They are ethics-aware, focused on responsible AI, and open source first. They represent not just innovation, but what democratized digital leadership looks like. While students demonstrated what responsible tech looks like from the ground up, our work with faculty led to initiatives addressing another critical frontier of AI: explainability in high-stakes domains. In our work with the Indian Institute of Information Technology, IIIT Kottayam, we developed something called the FactSets Lab, which launched a suite of explainability dashboards designed to tackle the larger black-box problem in AI. One of their dashboards helps users understand why an AI system made a decision, using SHAP values, bias audits, and fairness metrics. Similarly, we developed a dashboard called AI Fora, which enables real-time interactive testing of AI predictions on real data sets, making model behavior visible even to non-technical users. And finally, IXI, which applies explainability to medical AI by using Grad-CAM heat maps to highlight what influenced diagnostic decisions in retinal scans. These are open, and the key impact is that they give everyday users, regulators and policymakers the ability to question and, importantly, correct the course of AI. This is the future of public AI infrastructure: transparent, participatory, and grounded in accountability. And finally, our most powerful insights came not from labs, but from communities often left out of the AI conversation altogether. 
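The explainability dashboards described here rest on one core idea: attributing a model's prediction to its input features. As a hedged illustration only (this is not the FactSets Lab code, whose internals are not shown in this session; the feature names and weights below are invented), for a linear model the exact per-feature contribution relative to a dataset baseline can be computed directly, which is also what SHAP-style attributions reduce to in the linear, independent-features case:

```python
# Minimal feature-attribution sketch (illustrative only; all features,
# weights, and values are hypothetical). For a linear model, the
# contribution of each feature is weight * (value - dataset mean),
# which is what SHAP values reduce to in the linear case with
# independent features.

def linear_attributions(weights, x, background_means):
    """Per-feature contributions explaining (prediction - baseline)."""
    return {name: w * (x[name] - background_means[name])
            for name, w in weights.items()}

# Toy scoring model with invented features and weights.
weights = {"income": 0.5, "debt": -0.8, "tenure": 0.2}
background = {"income": 4.0, "debt": 2.0, "tenure": 3.0}  # dataset means
applicant = {"income": 6.0, "debt": 3.5, "tenure": 3.0}

contribs = linear_attributions(weights, applicant, background)
# Show the largest drivers first, as a dashboard would.
for name, c in sorted(contribs.items(), key=lambda kv: -abs(kv[1])):
    print(f"{name:>7}: {c:+.2f}")
```

Because the contributions sum exactly to the gap between this prediction and the baseline prediction, even a non-technical user can see what pushed a score up or down; real dashboards typically use a library such as shap to extend this idea to non-linear models.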
At the Lendi Institute of Engineering and Technology in Andhra Pradesh, we ran an ideathon with students from rural and semi-urban backgrounds, with guided activities in empathy, inquiry, and creative problem-solving, where students identified challenges in their own communities, from waste management to safety to water scarcity. They even built AI-assisted solution blueprints and video pitches, applying digital ethics in a more practical and personal way. In parallel, we took the RCC model even further to an area called Chintapalli, a tribal area in the Eastern Ghats, where we conducted workshops with 56 tribal women, many of whom had never accessed AI tools before. We did it in the local language, Telugu, through participatory storytelling, visuals, and guided use of AI tools such as ChatGPT, and we mapped real problems such as unemployment, safety, and healthcare, exploring how AI could support micro-enterprises in herbal medicine, food production, and arts and crafts, some of the prominent livelihoods these communities rely on. The result was not just minimal tech exposure but, I'm happy to say, a transformation powered by a technology as powerful as AI, built on cultural grounding, peer collaboration, and dignity-first design. These workshops proved that responsible AI doesn't begin with the tools; it begins with trust. So while wrapping up, let me say that the main lesson from India's AI ecosystem, and what we see works in emerging economies, the global south, or the global majority as we call it, especially having worked at the intersection of civil society, academia, and national policy, is that we need ecosystems that are locally rooted, capacity-driven, and above all, people-centered. And the most powerful lesson here is: don't just ask who builds AI, ask whose future it is building. Because in countries like ours, trust is not a given, it's earned. 
And when communities are trusted as co-creators, not just end users, they don't just adopt technology, they transform it. So if we want AI that is safe, just, and truly inclusive, we must design not only with code and policy, but with humility, memory, and imagination as well. So thank you very much for this opportunity. I will stop here.


Moderator: Okay, thank you very much, Jibu, for this wonderful story. It's great to hear about these experiences from the ground, and congratulations on your work. India's success with DPI and digital public goods is a powerful example of good policy practice, and I'm very happy to hear that responsible AI principles are backing up such success in digitalization. So now I would like to turn to Ms Anne Rachel Ng. From your perspective as a digital policy leader in Africa, what are some of the key opportunities and challenges for African countries in developing inclusive and context-sensitive AI policies? How can international initiatives like the OECD AI policy toolkit better support countries in the region, and what key considerations should be made? So Anne Rachel, the floor is yours.


Anne Rachel: Thank you very much, and good afternoon everybody. I'm actually very happy to go after Jibu in this conversation because he gave a lot of examples that I can relate to. But I'm going to start by saying that the Global AI Index places African countries in general among the "waking up" or "nascent" categories when it comes to AI investment, innovation, and implementation. So, while the index is led by countries like the United States and the United Kingdom, Egypt, Nigeria and Kenya are nascent, while Morocco, South Africa and Tunisia are waking up. There's a lot more waking up to do, and I really hope that we will soon all be graduating. So, we do face opportunities and challenges, and those lie basically in developing everything that is, as Jibu said, inclusive and context-sensitive in AI policy. I'm pretty sure international initiatives like the OECD toolkit can help, because it gives us a few places where we can pick and choose, and also makes sure we look into others' experiences, so that in doing what we have to do to get there, we do it the right way. In terms of key opportunities, we have development barriers that can be alleviated. AI can accelerate critical sectors like healthcare. For example, in my own country, Niger, we started years ago a program called Smart Villages, and we started with healthcare: telemedicine geared mostly to skin diseases, because it was easier to take pictures, send them to dermatologists and get treatment to people, and also disease prediction. And it has gone further. For example, I have a group of young people at home right now who are working on a device. Remember the oximeter during COVID, which measured oxygen levels in a person who was sick? 
A lot of researchers found out that that device does not gauge oxygen levels the right way in people who are melanated. So they decided it was something they wanted to work on during COVID. And today, they actually have a little device, just like the regular oximeter, whereby the light can penetrate darker skin and give true measures of the oxygen level in a person's body. In agriculture, precision farming and agroforestry are among the places where we've been using AI; in education, of course, personalized learning and the use of languages in general. Because hardly anybody in Africa grows up with just one language, it is important, when we're trying to build context into AI, that to earn trustworthiness we have people who really understand what's in it for them. We tend to have policies geared to people who can read and write what we call the official languages. And then we forget that in our settings about 60 to 80% of our populations are still rural. They don't speak English, they don't speak French. And if you want them to be part of this, you really have to explain it to them in their language. That's also one of the reasons why the little applications the kids are building, voice recognition software that can help people whether in fintech or healthcare or elsewhere, are really helping. We have another opportunity, which is simply that we have a very young population in the region. Now, we do need a skilled workforce, so capacity development and deployment is something we absolutely need. One of the big constraints that comes with that is that kids do not grow at the speed that artificial intelligence is growing. Taking my own country again, we have 65-plus percent who are under 25, and at least 50 percent who are under 15. 
So it's really a very young population, and as much as we need a lot of capacity building, we need to give it time, you know, for the kids to get to the point where we can have a sound and real workforce. We do have local innovation ecosystems that are really growing, with AI solutions geared to the local context: for example, using mobile financial tools to support everyone from women agriculturists all the way to land sharing and deed recognition in rural areas. So those are some of the key opportunities. Of course, we also have the regular challenges that everybody knows in terms of infrastructure. Again, in the African region we have 16 countries that are landlocked, so connectivity infrastructure is already something quite dear. Couple that together and you have only about 22% of Africans with broadband access. That's still something we need to work on, because it exacerbates the divide. In terms of policy and regulatory frameworks, we also have deep fragmentation, because many countries lack cohesive AI strategies or harmonized regulations. So you have uneven implementation, or even missed cross-border collaboration opportunities. Inasmuch as we have ministerial meetings on the continent to talk about one policy or another, we absolutely need, if we're going to use AI tools in fintech, to make sure that the finance minister understands it; it's not only the technology or digital minister talking about this. We need to make sure that if we're going for a national ID, the person who is going to be ID'd understands the reasons why and what advantages it brings them. 
And we also need all the different government ministries, you know, Interior, Defense, all the way to the national data protection agency, to talk together to make sure that whatever is put in place really protects people's privacy. We also, of course, have data scarcity and bias. As I just said, we have a lot of facial recognition systems globally that are trained on non-African data, and they perform poorly on our people. And in general, right now, at the minute we're speaking, only about 2% of the data generated on the continent is used locally. It's basically hard to get real data back to our institutions, simply because it's managed by global platforms that do not necessarily want to share it readily with us. And again, we have capacity constraints, because governments struggle to keep pace with AI advancements. You've barely started talking about data privacy when your agriculture minister wants to put a lot more in there, and environment, and everything. So all of it collides to the point where, honestly, governments are having a hard time sifting through the little data they have to make sense of it locally. Toolkits like the OECD one can help, but only if we really have modular, flexible guidance for low-resource settings. So things like what Jibu and Marlon talked about are really interesting and can be looked at, and that can also help some of our countries, because it's much better to have real use cases than generic benchmarks; those are great, but they don't really show you how to make it work at home. In terms of capacity building, we definitely need more AI research centers, and we need policy training and knowledge sharing, you know, with platforms. 
How to make that happen is also one of the things we're grappling with, and we need all of that, of course, so that our own policymakers can be empowered to have discussions at the level where policies can then trickle down to people. And, of course, we all talked about it: inclusive governance. Globally, we must include African voices to avoid the one-size-fits-all. I love the example of that oximeter, because we all kind of saw it and experienced it somehow, and to suddenly discover that this little device we were trusting was not really doing the right thing for us was really eye-opening. So it's important that everybody's perspective goes into making sure these global toolkits are done the right way, looking at people's particular settings and contexts. In terms of developing public-private partnerships, this is something that is starting to get more traction in the region, because of course government cannot do it all. We absolutely need the private sector to be part of this whole process, and also to make sure they can develop things that they can, you know, live on. Having said that, I will conclude by saying something that makes us all laugh all the time, and that at least a few here can relate to if you're African: we do say Europeans have watches, we have time. I'm just saying this to plead for taking the time to do things, because rushing into doing things that are not geared to the context just keeps us behind more than anything, because people do not understand what it is we're trying to do or where it is we're trying to get to. 
So it is truly important that everybody is listened to, everybody is part of the discussion, everybody is brought to the table, so that that trustworthiness that we want be not only in AI, but in the whole, you know, digital transformation that we want to see in our countries. Thank you.


Moderator: Okay, thank you very much for this very insightful presentation, Anne Rachel. I saw a lot of commonalities between your country and ours, on issues such as education; spreading the idea is always very difficult in Japan too. But I really agree with the point that an inclusive multi-stakeholder approach is definitely important here. So thank you very much. For the sake of time, I thank all speakers for those rich and insightful contributions. Now we turn to the second part of our session, which will focus on how governments are using AI in practice across key public functions. This is also relevant to the previous segment, as the OECD AI Policy Toolkit will have information on sectors, including the public sector. So I'm pleased to hand over the moderation to Ms Seong Ju Park, Policy Analyst at the Innovative, Digital and Open Government Division of the OECD, who will lead the next segment. So Seong Ju, please.


Seong Ju Park: Thank you, Mr Moderator. Before we start, I just want to quickly share: I was recently back in my country, Korea, and I needed to explain the history of a palace to the friends I had over there. Before, I would have searched for the palace, tried to understand the information I found, and then explained it in English to my friends. But this time, I just asked ChatGPT to give me a very catchy explanation of the palace, and I played it for my friends. So AI has changed many aspects of our lives: how we communicate, how we seek information. And this is affecting governments as well. It is accelerating the digital transformation of the public sector, changing how governments work and how they design and deliver policies and services. It has also changed the expectations and needs of the citizens and businesses they serve. So before I invite the two panelists I have here, I want to quickly present some of the OECD findings on AI in government. May I have the slides? Okay, can we put it in presenter mode? Thank you. So AI as a tool has great potential to support governments in improving productivity, responsiveness and accountability. AI can automate and streamline mundane and repetitive tasks, reallocating the efforts of public servants to more meaningful work, such as interacting with citizens and businesses. AI can also support tailoring processes and personalizing government services to meet users' needs. AI can enhance decision-making by supporting governments in making sense of the present and better forecasting the future. AI can support enhancing accountability and detecting anomalies. And AI can help governments unlock opportunities for external stakeholders. So how can governments enjoy these potential benefits in a trustworthy and responsible way? 
Our work on governing with AI seeks to address this question of how to develop and deploy trustworthy AI in government. We started by looking at what has been done across different government functions. We conducted an analysis of use cases across 11 government functions covering three broad categories: policy functions, key government processes, and service and justice. In total, 200 use cases were selected, based on influence, diversity, and representativeness. Based on the use cases, literature research, and recent policy developments, we were able to identify key trends shaping the current state of play, major risks, and implementation challenges that governments face, and also to explore potential uses and future pathways. The first trend we saw is that use cases are unevenly distributed. There are a number of potential explanations for the distribution you see on the screen. I won't be able to share them all, but I will try to share a couple with you. The policy functions most represented tend to be the ones most in the public eye, potentially suggesting a focus on areas that have immediate visibility to citizens. Factors behind this could involve both more demand from citizens and a desire among governments and political leaders to visibly demonstrate the value of using AI in government. We also found that some functions face particular barriers or complexities, such as stricter rules on data access and sharing, and stricter requirements for thorough audit trails in public integrity. Another trend we saw is a big emphasis on automating and personalizing processes and services. 
Slightly more than half of the examined use cases seek to contribute to the automation, streamlining, tailoring and personalization of government processes and services, particularly in justice, public services, civic participation and regulatory design and delivery. We found that four out of ten use cases seek to enhance decision-making, sense-making and forecasting, with most concentrated in public services, regulation and civic participation. I have some of the use cases here; I won't be able to go through them, but the OECD is planning to launch a more comprehensive report where you will be able to find some of the 200 use cases I mentioned earlier. So I will skip through the different use cases we found supporting different functions of government, and go to the most important topic when it comes to AI in government. It might not be a fun topic for us to discuss, but government's use of AI is quite different from the use of AI in the private sector. It comes with higher risk. It has potential dangers and threats that could seriously harm individuals' lives and society as a whole. It could potentially undermine the public's trust in government, the legitimacy of government's AI use, and even democratic values. To address these concerns, it is important to continuously consider potential risks that may not exist today, and here on the screen you see the five general risks we identified through our research. These risks range from ethical risk, operational risk and exclusion risk to public resistance and missed opportunities, and, as was mentioned during the earlier segment, a widened gap between public sector and private sector capacities. Beyond grappling with these risks, we also found that governments face a number of implementation challenges when seeking to develop and use AI. 
We found that while there are many use cases, many remain at the piloting stage and are struggling to scale from pilots into wider systems or services. There is also large room for improvement when it comes to actionable guidelines. Governments also need to navigate a rigid regulatory environment. The next challenge is shared by almost every government on this planet: there are inadequate data, skills and infrastructure in the public sector. In addition, governments need to better understand the costs and benefits of AI in the public sector. The costs and benefits around the use of AI in government are still largely unknown, which makes it quite difficult for policymakers to make business cases to scale up their AI efforts. To support governments in mitigating these risks and overcoming these challenges, we have worked together with partner countries on a framework to support governments' AI efforts. This is an evolving framework, and we seek only to provide guidance for countries so that they can continue on this AI journey. As you can see, the framework is organized around three sections. First is the level of engagement. This includes the different stakeholders that need to be engaged in building the foundations for a responsible use of AI in the public sector. Our previous speakers mentioned involving different stakeholders, not only from the public sector but also from the private sector, academia, and users, in devising AI strategies or developing AI solutions. So it's important to have different actors around the table. The second element is enablers. Enablers include areas where policy actions can be prioritized to establish a solid enabling environment and unlock the full-scale adoption of AI in the public sector. 
These areas include governance and capabilities, and collaborations and partnerships, where policymakers currently indicate the existence of important constraints and shortcomings. The last element is guardrails. Guardrails include options for policy levers that governments can consider developing for a responsible, trustworthy, and human-centered use of AI in the public sector. These can range from soft law, guidance and standards to legislation on AI, enforcement mechanisms, and oversight bodies. This work is part of a bigger OECD project called the Horizontal Project on Thriving with AI. Under this project, there are specific deliverables focusing on AI in government. As I mentioned before, there will be an OECD report on governing with AI, which goes much deeper into the details of what I just quickly presented to you. And there will be a dedicated hub for AI in the public sector on oecd.ai, a sort of repository for policymakers, practitioners and researchers. We are also planning a global data collection exercise on AI policies and use cases, which will be presented through the OECD AI Policy Observatory. So thank you very much; that was my very quick presentation, just to give you an idea of where OECD research stands when it comes to AI in government. Now I would like to invite the two panelists to hear from them what it means for governments to harness AI in practice. The first topic will be AI opportunities in the public sector, and I would like to invite Katarina first. Katarina, Norway has been exploring AI to enhance the efficiency and effectiveness of public sector services. Can you share with us some early impact that you see, or expect, from Norway's AI use in government?


Katarina de Brisis: Thank you, Seong Ju, for your introduction. Artificial intelligence tends to be perceived by now as being ChatGPT or the like, but actually artificial intelligence is much more than that, and it has already been applied and used in Norway for some years in many government services, especially in the health sector. We have several applications that are really having a practical impact on people's lives. One case is our Vestre Viken hospital trust, where they implemented AI for analysing x-rays of fractures, and it really saved time for the patients: by 79 days in some cases, and about 2,000 patients were able to go home immediately instead of waiting for the results of their analysis and their diagnosis. This is now being deployed to several other hospitals, so it gives really practical benefits on the ground. Then our Norwegian tax administration developed an AI model which, combined with rule-based models, analysed tax returns looking for missing declarations of income from renting out secondary homes. That actually led to an 85% detection rate, as opposed to 12% before, and it produced 110 million kroner in additional revenue. In cancer treatment, there are hospitals using AI to produce three-dimensional maps of internal organs for more targeted radiation treatment, in use since 2023. There are also hospitals using AI to give more accurate analysis of patients with epilepsy, so it can be diagnosed precisely and quickly. Our student loan agency uses AI for housing verification checks, just to be sure that no public funds are misappropriated by students saying they live in one place while actually living somewhere else and collecting grants for it. 
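The tax case above combines rule-based checks with a learned risk score. The pattern can be sketched as follows; this is a hedged illustration only, with invented thresholds, features, and weights, not the Norwegian tax administration's actual model:

```python
# Hybrid rule-plus-model screening sketch (hypothetical features and
# thresholds; not the actual tax administration system).

MODEL_THRESHOLD = 0.7  # flag when the risk score is at or above this

def model_risk_score(case):
    # Stand-in for a trained model: a toy hand-weighted score based on
    # signals like owning a second home and rental ads being found.
    score = 0.0
    if case["owns_second_home"]:
        score += 0.5
    if case["rental_ads_found"]:
        score += 0.4
    return min(score, 1.0)

def flag_for_review(case):
    # Rule: a second home is owned but no rental income was declared.
    rule_hit = case["owns_second_home"] and case["declared_rental_income"] <= 0.0
    model_hit = model_risk_score(case) >= MODEL_THRESHOLD
    # Combining both signals raises precision: the rule narrows the
    # relevant population, and the score ranks risk within it.
    return rule_hit and model_hit

cases = [
    {"owns_second_home": True,  "declared_rental_income": 0.0,     "rental_ads_found": True},
    {"owns_second_home": True,  "declared_rental_income": 12000.0, "rental_ads_found": False},
    {"owns_second_home": False, "declared_rental_income": 0.0,     "rental_ads_found": False},
]
flags = [flag_for_review(c) for c in cases]
print(flags)  # only the first case is flagged
```

The design choice worth noting is the conjunction of the two signals: a rule alone over-flags (every undeclared second home), while a score alone is opaque; requiring both keeps the caseload small and each flag explainable to the taxpayer.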
Our police authorities use AI for transcription of interrogations when they investigate crime, which saves a lot of time because the AI transcribes spoken language into written language immediately. So, in general, we already have a lot of this kind of use, but the potential is still very great. We did a state employer survey in 2025 which asked 200 state agencies about their use of AI, and 70% answered that they actually use AI in their daily work. I think this is mostly generative AI systems, which they use for things like drafting job advertisements, case processing, analytical work, and helping them in recruitment procedures. But this is the state level; we have 400 or more municipalities, which are very small, and the potential there is much greater. We still have a way to go there, and what we also need to work on is better tools to assess the benefits of AI. We have cases with real benefits already produced, but to look across the board we need tools that give us a methodological foundation for assessing the benefits of introducing AI across various sectors and levels of government; that is what we need to work more on. So I'll just maybe finish here.


Seong Ju Park: No, thank you. That is a really important point. I think many governments are still trying to find the best way to measure what benefits and impact the use of AI actually brings in the long run. But some of the cases you shared clearly demonstrated that the use of AI has supported the Norwegian government in enhancing efficiency, but also in improving people's lives, saving them time and money. Then I will go to Dr. Kim. Dr. Kim, you have conducted extensive research on Korea's use of digital technology, including AI, for enhancing services and policies. Could you describe the key elements that governments should consider when using AI to ensure that it is used effectively, innovatively, and inclusively?


Jungwook Kim: Thank you. Korea is ranked as one of the leading countries in the OECD Digital Government Index, which was published recently. And as Anne Rachel said, there are different stages of development and adoption of AI technologies on the public side. But I'm pretty sure that there is no graduation: it's a long journey, and a gradual change in the government services delivered to the public. So I'd like to address some of the key enablers, or pillars, of the Korean history of AI adoption and digitalization in public services. The first one is innovation. Innovation is change: change in your life, change in how you work, and change in how you address needs and deliver services. For innovation, we have three different targets. One is data. We need open data, but we also need machine-readable data, which was not available before. That means we need research and development on data: accessing data, processing data, aggregating it, and changing data formats so that we can utilize it in AI adoption. So we need change in the data. The second is infrastructure. Each and every government has infrastructure for providing public services, but the adoption of AI challenges it. That means we need innovative ways to deal with the current infrastructure of public service delivery. And the third is public service delivery itself. We need brand-new, citizen-centric AI public services, which were not available before. It is feasible, however, and we need to work out the way we provide services and address the demands of citizens. So those are the innovations: data, infrastructure and public service development. The other pillar is inclusion. That means we should take care of the digital divide, for sure, 
Even Korea experiences a digital divide: by gender, by region, by income, and by education. So we need enhanced accessibility in the adoption of AI for public services. That might mean enhancing accessibility through AI-driven, hyper-personalized services in the public sector, or focusing on effective access for vulnerable or isolated groups so that they can easily reach public services. The other aspect is capability. We need to educate and train public officers as well as citizens, because AI changes how we live and offers innovative ways to take care of issues. So inclusion can be separated into accessibility enhancement and education for capacity building and capability increase. Those are two pillars of AI adoption in public services. And the final element is investment. Adopting, developing, and deploying AI services in the public sector requires huge resources. Innovation and inclusion require investment, so you should spend your money wisely and strategically for AI adoption.


Seong Ju Park: Thank you very much. So, the data, the infrastructure, and also innovating how we approach public service design: these are hot topics for many of our delegates as well. And then there is the last point on investment. AI has put a bigger spotlight on the need for governments to think strategically about how they spend public money when investing in digital and AI-related systems and services. And I cannot agree with you more that we are on a long journey. I often say it is a moving target, so there is a new target every day and no graduation. I think this is true for many governments around the world. So thank you for sharing the key policy issues. I understand that your work also includes elements to support the safe and trustworthy use of AI. How could governments use AI in a responsible and trustworthy way? What are the key elements to avoid or mitigate the five risks that I mentioned earlier?


Jungwook Kim: Thank you. So the question deals with the safety and security issues around AI in the work of public organizations and public bodies, and there are big challenges in dealing with those security issues, especially for public services, because a lot of detailed personal data is accumulated and processed by public bodies. That means we need to secure the safety of that data; that is the top priority. Citizens need rights over their personal data, not just anyone or some stakeholders being given access to personalized data. Rather, you need consensus, and you must get explicit consent before utilizing and processing personal data. That is the way to address safety concerns around personalization and privacy. The second one is security. Systems are vulnerable to hacking or other malicious manipulation, and open network infrastructure and mobile-based systems bring challenges of this kind. The system itself should be secured, designed, and maintained in a safer way. That is another challenge in dealing with safety issues. The third one is AI safety and governance. As you said, it is a moving target, so we need agile measures to take care of AI safety issues. We have examples of breaches of privacy and of harm to citizens’ safety. There are many dialogues on these topics, but each and every country should establish safety and governance in the right manner, in a sound system, so that they can address those issues in real time, and even in advance, to minimize the risks and uncertainty associated with AI implementation. These issues are not independent from our daily life. Rather, they have a great impact on the daily lives of citizens at large scale.
So for public services, the employment and deployment of AI should be set out clearly in each country’s AI safety and governance framework. That is what we can say based upon the Korean experience.


Seong Ju Park: Thank you very much. This is really important when it comes to data, and especially sensitive data, because we found that some sectors, including the social security, healthcare, and justice sectors, hold much more sensitive personal information about their users: citizens and businesses. And I cannot agree with you more on the need for agile governance. Many governments have been talking about being more agile, but I think we haven’t reached that point yet. It will be important to have governance that allows proactive and timely measures to prevent or mitigate the risks that we see. Katarina, I will come to you. What concrete initiatives is Norway implementing to ensure that AI in government is safe and trustworthy?


Katarina de Brisis: Thank you. Let me start with a couple of reflections on the challenges of implementing AI. For us, one of the main challenges is the leadership and competence level in government agencies; that is what underpins trustworthy use of AI. We need managers in state agencies who understand both the opportunities and the risks associated with using AI, and we know that 60% of our state organizations already implement measures to increase employee competence. These are the people who are actually working with and managing AI-based systems. And 43% have created internal guidelines for using AI. So this is about building a foundation within each public agency. Another important issue is dialogue between the employer, the management, and employee representatives, so that those people also feel they have a finger on the levers of how AI is being deployed and implemented in the agency. The second thing is access to data. I agree with Professor Kim that this is a crucial issue. We have a number of very good quality registers, and we have been working for several years on opening that data, but the opening must happen in a responsible way. That is why, in Norway at least, accessing personal data for the purpose of training and using AI systems requires a legal basis. You cannot just say, okay, I have this data, I pick it, I train a system, and here we go. You have to have a legal basis. Procuring this legal basis may take time with the legislative branch. When you have it, you can proceed, but within safety and security constraints. Another thing is, of course, having a legal framework in general. Norway is now working on implementing the EU AI Act, which will be our overarching framework for using AI in Norway. We aim to implement it on par with EU countries to create a level playing field.
Already in 2020, we put forward a national strategy for AI, which set out seven principles for responsible and trustworthy AI. Those principles are further endorsed by our new digitalization strategy for Norway, published in the fall of 2024. In that strategy our government has very ambitious goals: it wants public agencies to adopt AI at a very quick rate. Already in 2025, 80 percent of public agencies should use AI, and by 2030, 100 percent. So as you see, it is very ambitious, but we work quite diligently to make it possible, both within agencies, as I was describing, and at the national level by investing in AI infrastructure. The government has, for example, invested 40 million kroner early on in developing foundational models in our languages, Norwegian and the Sami languages, based on our societal values, so that we have systems that really reflect who we are, not the whole of the internet. The other investment we are looking at is our high-performance computing infrastructure, to enable AI to be developed and trained at the scale that is needed. This infrastructure may be used by both public and private entities. For example, we have one startup called Digifarm that uses AI to help farmers predict what to sow, when, and where, and this requires computing power; this kind of infrastructure can provide it even to small startups and companies. And of course, to enforce the AI Act, we are establishing a national enforcement structure: one authority, our national communication authority, will look at compliance with the AI Act. We will also establish AI Norway, which will be an arena for sharing experience and guidance, and for testing systems in a regulatory sandbox, in a very safe environment before deployment. We will also collaborate with our data protection authority on this regulatory sandbox, so that systems trained on personal data
may be tested there. So this is an outline of how we work, at both the micro and macro levels, to enable trustworthy and safe AI in Norway. Thank you.


Seong Ju Park: Thank you very much for sharing Norway’s experience and what Norway has been doing. I remember one tool implemented by a country I won’t name. It was supposed to support public sector officials in their jobs, but the users of the tool weren’t really trained in how to use it, and in the end, what was supposed to be a supporting tool ended up making wrong decisions for the government. So I see how building employee capabilities, and leadership around AI and digital, is key to ensuring trustworthy use of AI. I will conclude our segment here. Thank you very much to you both, and I give the floor back to you, Mr. Moderator.


Moderator: Okay, thank you very much to all the speakers in segment two for the wonderful discussion, and I apologize to the speakers in segment one that I cannot come back to you for final comments. Now I will open the floor to the audience for any questions or comments on both segments of this open forum. So, no questions. I’m sorry the time has run out; apologies for the time management, but I hope you enjoyed the discussion. If you have any questions, please contact the individual speakers directly. Let me also share that we will have another session on AI tomorrow morning at nine o’clock in the conference hall. So thank you very much to all the audience and to all the speakers, and this session is closed. Thank you very much.


Marlon Avalos

Speech speed

116 words per minute

Speech length

951 words

Speech time

487 seconds

Costa Rica initiated the toolkit based on their national AI strategy experience, recognizing that developing countries need practical tools to implement OECD principles

Explanation

Costa Rica proposed the OECD AI principles implementation toolkit after experiencing challenges in developing their own national AI strategy. They recognized that while OECD principles provide strong ethical guidance, many countries in the Global South lack the tools and institutions to turn those principles into concrete actions.


Evidence

Costa Rica launched their national AI strategy in October with support from over 50 entities across government, academia, civil society, and the private sector. They conducted a national risk assessment and benchmarked against various international instruments, including the EU AI Act and the U.S. AI Risk Management Framework.


Major discussion point

OECD AI Principles Implementation Toolkit Development


Topics

Development | Legal and regulatory


Agreed with

– Lucia Rossi
– Moderator
– Seong Ju Park

Agreed on

Practical implementation tools and frameworks are needed to translate AI principles into action


International collaboration is essential for developing countries, requiring customization, learning, and evidence-based approaches

Explanation

Avalos emphasized that even politically stable and technically skilled countries like Costa Rica face challenges in AI policy development, making international collaboration crucial. The success of the toolkit depends on features that reflect local needs, processes that evolve over time, and metrics that show AI delivers value for people.


Evidence

Costa Rica’s active participation in OECD, GPAI, regional initiatives, and European programs provided the foundation for their strategy. The toolkit is now endorsed by several countries and entering regional co-creation phase with support from Japan, Korea, Italy, France, EU, and Slovakia.


Major discussion point

International Cooperation and Knowledge Sharing


Topics

Development | Legal and regulatory


Agreed with

– Anne Rachel
– Jibu Elias

Agreed on

International collaboration is essential for AI development, especially for developing countries


Technical connectivity issues demonstrate daily challenges that developing countries face in AI implementation

Explanation

During the session, Avalos experienced connection problems which he used as a real-time example of the infrastructure challenges that developing countries face every day. This technical difficulty illustrated the broader connectivity and infrastructure barriers that hinder AI adoption in the Global South.


Evidence

Avalos lost his internet connection during the presentation and had to reconnect, stating ‘this is a challenge that developing countries like us face every day, every time.’


Major discussion point

Challenges in AI Implementation for Developing Countries


Topics

Infrastructure | Development


Agreed with

– Anne Rachel

Agreed on

Infrastructure and connectivity challenges are major barriers for developing countries


Lucia Rossi

Speech speed

108 words per minute

Speech length

682 words

Speech time

377 seconds

The toolkit will provide self-assessment tools and region-specific guidance through co-creation workshops to help countries bridge AI divides

Explanation

The OECD AI principles implementation toolkit will be an online tool with two main components: a self-assessment that guides countries through areas to strengthen in AI governance and priorities to establish, followed by suggestions based on best practices from comparable regions. The toolkit emphasizes co-creation through regional workshops to understand challenges and resource needs.


Evidence

The toolkit will build on the OECD AI Policy Observatory repository and include regional workshops starting with one in Thailand supported by Japan with ASEAN countries, followed by workshops with African countries and Central/South American countries.


Major discussion point

OECD AI Principles Implementation Toolkit Development


Topics

Development | Legal and regulatory


Agreed with

– Marlon Avalos
– Moderator
– Seong Ju Park

Agreed on

Practical implementation tools and frameworks are needed to translate AI principles into action


Jibu Elias

Speech speed

140 words per minute

Speech length

1209 words

Speech time

515 seconds

Responsible AI must be inclusive, accessible, and rooted in local values, focusing on communities most affected but least represented in AI development

Explanation

Elias argued that responsible AI adoption in emerging economies requires focusing on context and inclusion rather than just capacity. The approach should center on people, especially students, marginalized communities, women, and first-generation learners who are most affected by AI but least represented in building it.


Evidence

Mozilla’s Responsible Computing Challenge in India worked with students, academic faculties, women, tribal populations, and first-generation learners. They conducted workshops with 56 tribal women in Chintapalli using local language Telugu and participatory methods.


Major discussion point

Community-Led and Inclusive AI Development


Topics

Development | Human rights principles


Agreed with

– Anne Rachel

Agreed on

Community-centered and inclusive approaches are crucial for responsible AI development


Students and marginalized communities can create global public goods when empowered with ethical frameworks and open tools

Explanation

When provided with ethical frameworks and open-source tools, even first-year students can develop innovative AI solutions that address real community needs. These tools demonstrate that democratized digital leadership can produce globally relevant innovations rooted in local contexts.


Evidence

Examples include WebBeast (AI-powered accessibility widget by a first-year BCS student, now used by 30 websites globally and received Indian design patent), PhysioPlay (WhatsApp-based AI simulation for physiotherapy students), SpeakBoost (communication coaching platform), and TwinSage (personal finance chatbot for college students).


Major discussion point

Community-Led and Inclusive AI Development


Topics

Development | Sociocultural


Trust is earned through community co-creation rather than just end-user adoption, requiring locally rooted and people-centered ecosystems

Explanation

Elias emphasized that in countries like India, trust in AI systems is not automatically given but must be earned through inclusive development processes. When communities are treated as co-creators rather than just end users, they don’t just adopt technology but transform it to meet their specific needs and contexts.


Evidence

The tribal women workshops in Chintapalli resulted in tech transformation powered by AI but grounded in cultural values, peer collaboration, and dignity-first design. The workshops proved that responsible AI begins with trust-building rather than just tool deployment.


Major discussion point

Community-Led and Inclusive AI Development


Topics

Development | Sociocultural


Agreed with

– Marlon Avalos
– Anne Rachel

Agreed on

International collaboration is essential for AI development, especially for developing countries


Anne Rachel

Speech speed

124 words per minute

Speech length

1723 words

Speech time

833 seconds

AI opportunities exist in healthcare, agriculture, and education, but require addressing infrastructure constraints and capacity building for young populations

Explanation

African countries have significant opportunities to use AI for development challenges in key sectors, but face constraints in connectivity and need time to build workforce capacity. The young population (65% under 25 in Niger) represents both an opportunity and a challenge requiring patient capacity development.


Evidence

Niger’s smart villages program started with telemedicine for skin diseases, students developed an oximeter for melanated skin during COVID, and various AI applications in precision farming, agroforestry, personalized learning, and voice recognition software for local languages.


Major discussion point

AI opportunities exist in healthcare, agriculture, and education, but require addressing infrastructure constraints and capacity building for young populations


Topics

Development | Infrastructure


Agreed with

– Jibu Elias

Agreed on

Community-centered and inclusive approaches are crucial for responsible AI development


Only 22% of Africans have broadband access, and 16 African countries are landlocked, creating connectivity challenges

Explanation

Infrastructure limitations significantly constrain AI adoption across Africa, with low broadband penetration rates and geographic challenges for landlocked countries. These connectivity issues exacerbate digital divides and limit access to AI technologies and services.


Evidence

Specific statistics: 22% broadband access rate across Africa, 16 landlocked countries in the region, and connectivity infrastructure costs are particularly high for these geographic constraints.


Major discussion point

Challenges in AI Implementation for Developing Countries


Topics

Infrastructure | Development


Agreed with

– Marlon Avalos

Agreed on

Infrastructure and connectivity challenges are major barriers for developing countries


Data scarcity and bias affect AI systems, with only 2% of African-generated data used locally and facial recognition systems performing poorly on African populations

Explanation

African countries face significant data challenges where most locally generated data is managed by global platforms and not shared back with local institutions. Additionally, many AI systems trained on non-African data perform poorly for African users, creating bias and effectiveness issues.


Evidence

Only 2% of data generated on the African continent is used locally, and facial recognition systems globally are trained on non-African data and perform poorly on African people.


Major discussion point

Challenges in AI Implementation for Developing Countries


Topics

Human rights principles | Development


Taking time to develop context-appropriate solutions is more important than rushing implementation without proper understanding

Explanation

Anne Rachel emphasized the African saying ‘Europeans have watches, we have time’ to advocate for patient, context-sensitive AI development. Rushing into AI implementation without proper understanding of local contexts and needs keeps countries behind rather than advancing them.


Evidence

The African proverb ‘Europeans have watches, we have time’ and emphasis on the need for everyone to be part of the discussion and brought to the table for trustworthy digital transformation.


Major discussion point

International Cooperation and Knowledge Sharing


Topics

Development | Sociocultural


Agreed with

– Marlon Avalos
– Jibu Elias

Agreed on

International collaboration is essential for AI development, especially for developing countries


Disagreed with

– Katarina de Brisis

Disagreed on

Pace and approach to AI implementation


Katarina de Brisis

Speech speed

120 words per minute

Speech length

1219 words

Speech time

606 seconds

Norway has successfully implemented AI in healthcare for X-ray analysis, tax administration for fraud detection, and police transcription services, showing practical benefits

Explanation

Norway has deployed AI across multiple government sectors with measurable impacts on efficiency and citizen services. These implementations demonstrate concrete benefits including reduced waiting times for patients, increased detection rates for tax fraud, and time savings for police investigations.


Evidence

Vestre Viken hospital’s AI X-ray analysis saved 2000 patients 79 days of waiting time; the tax administration’s AI increased detection rates from 12% to 85% and generated 110 million kroner in additional revenue; police use AI for automatic transcription of interrogations.


Major discussion point

AI Applications in Government Services


Topics

Economic | Legal and regulatory


70% of Norwegian state agencies use AI in daily work, but municipalities and benefit assessment tools need further development

Explanation

While AI adoption is widespread among state agencies for tasks like job advertisements and case processing, there’s still significant potential for expansion, particularly at the municipal level and in developing better tools to assess AI benefits across different sectors and government levels.


Evidence

Survey of 200 state agencies showed 70% use AI daily, mostly generative AI for designing job advertisements, case processing, analytical work, and recruitment procedures. Norway has 400+ municipalities with much greater potential for AI adoption.


Major discussion point

AI Applications in Government Services


Topics

Economic | Development


Leadership competence, legal frameworks, and employee training are crucial for trustworthy AI implementation in government

Explanation

Successful AI implementation requires managers who understand both opportunities and risks, proper legal basis for data access, and comprehensive employee training. Norway emphasizes building competence within agencies and ensuring dialogue between management and employee representatives.


Evidence

60% of state organizations implement measures to increase employee competence, 43% created internal AI guidelines, and Norway requires legal basis for accessing personal data for AI training purposes.


Major discussion point

Trustworthy AI Governance and Risk Management


Topics

Legal and regulatory | Development


Agreed with

– Jungwook Kim
– Seong Ju Park

Agreed on

Data security and governance are critical for trustworthy AI in government


Norway is implementing the EU AI Act and investing in Norwegian language foundational models and computing infrastructure

Explanation

Norway is creating a comprehensive AI governance framework by implementing the EU AI Act alongside national strategies and investments. The government has ambitious goals for AI adoption across public agencies while building supporting infrastructure including language-specific models and computing resources.


Evidence

Norway aims for 80% of public agencies to use AI by 2025 and 100% by 2030; invested 40 million kroner in Norwegian and Sami language foundational models; establishing AI Norway for experience sharing and regulatory sandbox testing.


Major discussion point

Trustworthy AI Governance and Risk Management


Topics

Legal and regulatory | Infrastructure


Disagreed with

– Anne Rachel

Disagreed on

Pace and approach to AI implementation


Moderator

Speech speed

99 words per minute

Speech length

1453 words

Speech time

874 seconds

Japan’s leadership in proposing OECD AI principles in 2016 and current efforts to make comprehensive principles into practical policies

Explanation

Japan initiated international discussions on AI principles at the OECD in 2016, leading to the comprehensive OECD AI principles. Now Japan is working with other countries to translate these high-level principles into practical policies and actionable guidance for governments and stakeholders.


Evidence

Japan proposed an international discussion on AI principles to the OECD in 2016, which became the foundation for the OECD AI Principles. Japan is now collaborating with Costa Rica, Korea, and others, backed by the OECD Secretariat, to turn the comprehensive principles into practical policies.


Major discussion point

OECD AI Principles Implementation Toolkit Development


Topics

Legal and regulatory | Development


Agreed with

– Marlon Avalos
– Lucia Rossi
– Seong Ju Park

Agreed on

Practical implementation tools and frameworks are needed to translate AI principles into action


Jungwook Kim

Speech speed

124 words per minute

Speech length

887 words

Speech time

428 seconds

Korea’s AI adoption requires innovation in data, infrastructure, and service delivery, plus inclusion through accessibility and capability building

Explanation

Kim outlined Korea’s approach to AI adoption in government through three key pillars: innovation (requiring changes in data formats, infrastructure, and citizen-centric services), inclusion (addressing digital divides and enhancing accessibility), and investment (strategic resource allocation for AI development and deployment).


Evidence

Korea is ranked as one of the leading countries in OECD Digital Government Index. The approach focuses on machine-readable data, innovative infrastructure adaptation, and brand new citizen-centric AI public services, while addressing digital divides by gender, region, income, and education.


Major discussion point

AI Applications in Government Services


Topics

Development | Economic


Data security, system security, and agile AI governance are essential for protecting citizens’ personal data and rights

Explanation

Kim emphasized that public sector AI use requires top priority on data security due to the accumulation of detailed personal data in government systems. This includes securing citizens’ rights to their personal data, protecting against system vulnerabilities, and establishing agile governance measures to address AI safety issues in real-time.


Evidence

Public bodies process a lot of detailed personal data requiring explicit consent for utilization, systems are vulnerable to hacking and malicious functions, and Korea has established AI safety and governance measures based on their experience with privacy breaches and citizen safety issues.


Major discussion point

Trustworthy AI Governance and Risk Management


Topics

Human rights principles | Legal and regulatory


Agreed with

– Katarina de Brisis
– Seong Ju Park

Agreed on

Data security and governance are critical for trustworthy AI in government


Investment in AI adoption requires strategic resource allocation across innovation, inclusion, and infrastructure development

Explanation

Kim argued that successful AI adoption in government requires substantial and strategic investment across multiple areas. The three pillars of innovation, inclusion, and investment are interconnected, requiring governments to spend resources wisely and strategically to achieve effective AI deployment in public services.


Evidence

Korea’s experience shows that AI adoption requires huge resources to develop and deploy AI services in the public sector, and strategic investment is needed across data development, infrastructure adaptation, and capability building.


Major discussion point

International Cooperation and Knowledge Sharing


Topics

Economic | Development


Seong Ju Park

Speech speed

125 words per minute

Speech length

2095 words

Speech time

1003 seconds

AI use cases are unevenly distributed across government functions, with emphasis on automation and personalization of processes

Explanation

OECD research analyzing 200 AI use cases across 11 government functions found an uneven distribution, with the most represented policy functions being those in the public eye. Over half of the use cases focus on automating, streamlining, and personalizing government processes and services, particularly in justice, public services, and civic participation.


Evidence

Analysis of 200 use cases across 11 government functions covering policy functions, key government processes, and service and justice. Slightly more than half seek automation and personalization, while four out of 10 use cases enhance decision-making and forecasting.


Major discussion point

AI Applications in Government Services


Topics

Legal and regulatory | Economic


AI in government carries higher risks than private sector use, including ethical, operational, exclusion, and public resistance risks

Explanation

Government AI use differs significantly from private sector applications due to higher stakes and potential for serious harm to individuals and society. These risks can undermine public trust in government, legitimacy of AI use, and democratic values, requiring continuous consideration of potential future risks.


Evidence

Five identified risks: ethical risk, operational risk, exclusion risk, public resistance, and widened gaps between public and private sector capacities. Government AI use has potential dangers that could seriously harm individuals’ lives and society as a whole.


Major discussion point

Trustworthy AI Governance and Risk Management


Topics

Human rights principles | Legal and regulatory


Agreed with

– Katarina de Brisis
– Jungwook Kim

Agreed on

Data security and governance are critical for trustworthy AI in government


The OECD framework provides guidance on stakeholder engagement, enabling environments, and guardrails for responsible AI use

Explanation

The OECD has developed an evolving framework organized around three sections to support government AI efforts: level of engagement (involving different stakeholders), enablers (policy actions for solid enabling environment), and guardrails (policy levers for responsible and trustworthy AI use).


Evidence

The framework includes stakeholder engagement from public, private, academia, and users; enablers covering governance, capabilities, collaborations and partnerships; and guardrails ranging from soft laws and guidance to legislation and oversight bodies.


Major discussion point

International Cooperation and Knowledge Sharing


Topics

Legal and regulatory | Development


Agreed with

– Marlon Avalos
– Lucia Rossi
– Moderator

Agreed on

Practical implementation tools and frameworks are needed to translate AI principles into action


Agreements

Agreement points

International collaboration is essential for AI development, especially for developing countries

Speakers

– Marlon Avalos
– Anne Rachel
– Jibu Elias

Arguments

International collaboration is essential for developing countries, requiring customization, learning, and evidence-based approaches


Taking time to develop context-appropriate solutions is more important than rushing implementation without proper understanding


Trust is earned through community co-creation rather than just end-user adoption, requiring locally rooted and people-centered ecosystems


Summary

All three speakers from developing countries emphasized that successful AI implementation requires international cooperation, context-sensitive approaches, and community involvement rather than top-down or rushed implementations


Topics

Development | Legal and regulatory


Infrastructure and connectivity challenges are major barriers for developing countries

Speakers

– Marlon Avalos
– Anne Rachel

Arguments

Technical connectivity issues demonstrate daily challenges that developing countries face in AI implementation


Only 22% of Africans have broadband access, and 16 African countries are landlocked, creating connectivity challenges


Summary

Both speakers highlighted infrastructure limitations as fundamental barriers to AI adoption, with Avalos experiencing connectivity issues during the session and Anne Rachel providing specific statistics about African connectivity challenges


Topics

Infrastructure | Development


Community-centered and inclusive approaches are crucial for responsible AI development

Speakers

– Jibu Elias
– Anne Rachel

Arguments

Responsible AI must be inclusive, accessible, and rooted in local values, focusing on communities most affected but least represented in AI development


AI opportunities exist in healthcare, agriculture, and education, but require addressing infrastructure constraints and capacity building for young populations


Summary

Both speakers emphasized the importance of involving local communities, especially marginalized groups, in AI development and ensuring that solutions address real local needs and contexts


Topics

Development | Human rights principles


Data security and governance are critical for trustworthy AI in government

Speakers

– Katarina de Brisis
– Jungwook Kim
– Seong Ju Park

Arguments

Leadership competence, legal frameworks, and employee training are crucial for trustworthy AI implementation in government


Data security, system security, and agile AI governance are essential for protecting citizens’ personal data and rights


AI in government carries higher risks than private sector use, including ethical, operational, exclusion, and public resistance risks


Summary

All three speakers agreed that government AI implementation requires robust governance frameworks, data protection measures, and comprehensive risk management approaches due to the sensitive nature of government data and services


Topics

Human rights principles | Legal and regulatory


Practical implementation tools and frameworks are needed to translate AI principles into action

Speakers

– Marlon Avalos
– Lucia Rossi
– Moderator
– Seong Ju Park

Arguments

Costa Rica initiated the toolkit based on their national AI strategy experience, recognizing that developing countries need practical tools to implement OECD principles


The toolkit will provide self-assessment tools and region-specific guidance through co-creation workshops to help countries bridge AI divides


Japan’s leadership in proposing OECD AI principles in 2016 and current efforts to make comprehensive principles into practical policies


The OECD framework provides guidance on stakeholder engagement, enabling environments, and guardrails for responsible AI use


Summary

Multiple speakers agreed on the need for practical tools and frameworks to help countries implement high-level AI principles, with the OECD toolkit representing a collaborative effort to bridge the gap between principles and practice


Topics

Legal and regulatory | Development


Similar viewpoints

Both speakers emphasized the potential of young people and marginalized communities to drive AI innovation when given proper support and tools, highlighting examples of student-led innovations and the importance of capacity building for young populations

Speakers

– Jibu Elias
– Anne Rachel

Arguments

Students and marginalized communities can create global public goods when empowered with ethical frameworks and open tools


AI opportunities exist in healthcare, agriculture, and education, but require addressing infrastructure constraints and capacity building for young populations


Topics

Development | Sociocultural


Both speakers from developed countries shared experiences of successful government AI implementations with measurable benefits, emphasizing the importance of systematic approaches to AI adoption across multiple government sectors

Speakers

– Katarina de Brisis
– Jungwook Kim

Arguments

Norway has successfully implemented AI in healthcare for X-ray analysis, tax administration for fraud detection, and police transcription services, showing practical benefits


Korea’s AI adoption requires innovation in data, infrastructure, and service delivery, plus inclusion through accessibility and capability building


Topics

Economic | Development


Both speakers highlighted how AI systems often fail to serve non-Western populations effectively due to bias and lack of local data representation, emphasizing the need for locally developed and culturally appropriate AI solutions

Speakers

– Anne Rachel
– Jibu Elias

Arguments

Data scarcity and bias affect AI systems, with only 2% of African-generated data used locally and facial recognition systems performing poorly on African populations


Trust is earned through community co-creation rather than just end-user adoption, requiring locally rooted and people-centered ecosystems


Topics

Human rights principles | Development


Unexpected consensus

The importance of taking time for proper AI implementation rather than rushing

Speakers

– Anne Rachel
– Jungwook Kim

Arguments

Taking time to develop context-appropriate solutions is more important than rushing implementation without proper understanding


Korea’s AI adoption requires innovation in data, infrastructure, and service delivery, plus inclusion through accessibility and capability building


Explanation

It was unexpected to see both a developing country representative (Anne Rachel) and a developed country representative (Jungwook Kim) agree on the importance of patient, gradual AI implementation. This consensus suggests that even advanced countries recognize AI adoption as a long-term journey requiring careful planning rather than rapid deployment


Topics

Development | Sociocultural


The universal challenge of measuring AI benefits in government

Speakers

– Katarina de Brisis
– Seong Ju Park

Arguments

70% of Norwegian state agencies use AI in daily work, but municipalities and benefit assessment tools need further development


AI use cases are unevenly distributed across government functions, with emphasis on automation and personalization of processes


Explanation

Despite Norway’s advanced AI implementation, both speakers acknowledged that even leading countries struggle with measuring AI benefits and achieving even distribution across government functions. This suggests that assessment and scaling challenges are universal, not just issues for developing countries


Topics

Economic | Legal and regulatory


Overall assessment

Summary

The speakers demonstrated strong consensus on several key areas: the need for international cooperation and practical implementation tools, the importance of inclusive and community-centered approaches, the critical role of data governance and security in government AI, and the recognition that AI implementation is a gradual process requiring patience and proper planning. There was also agreement on the challenges of infrastructure, capacity building, and the need for context-sensitive solutions.


Consensus level

High level of consensus with complementary perspectives from different regions and development stages. The agreement spans both technical and social aspects of AI implementation, suggesting a mature understanding of AI governance challenges across different contexts. This consensus provides a strong foundation for international cooperation and the development of practical tools like the OECD AI principles implementation toolkit.


Differences

Different viewpoints

Pace and approach to AI implementation

Speakers

– Anne Rachel
– Katarina de Brisis

Arguments

Taking time to develop context-appropriate solutions is more important than rushing implementation without proper understanding


Norway is implementing the EU AI Act and investing in Norwegian language foundational models and computing infrastructure


Summary

Anne Rachel advocates for a patient, time-intensive approach emphasizing the African saying ‘Europeans have watches, we have time’ and warns against rushing AI implementation without proper context understanding. In contrast, Katarina presents Norway’s very ambitious timeline with 80% of public agencies using AI by 2025 and 100% by 2030, representing a rapid deployment approach.


Topics

Development | Sociocultural


Unexpected differences

Infrastructure challenges as demonstration vs. systematic barrier

Speakers

– Marlon Avalos
– Anne Rachel

Arguments

Technical connectivity issues demonstrate daily challenges that developing countries face in AI implementation


Only 22% of Africans have broadband access, and 16 African countries are landlocked, creating connectivity challenges


Explanation

While both speakers address infrastructure challenges, Avalos uses his technical difficulties as a real-time demonstration of connectivity issues, suggesting these are manageable obstacles that can be worked around. Anne Rachel presents infrastructure limitations as fundamental systematic barriers requiring substantial structural changes. This represents an unexpected difference in framing the same core issue – whether infrastructure challenges are symptomatic problems or foundational barriers to AI adoption.


Topics

Infrastructure | Development


Overall assessment

Summary

The discussion shows remarkably high consensus on core principles (inclusion, context-sensitivity, international cooperation) but reveals subtle yet significant differences in implementation philosophy and pace


Disagreement level

Low to moderate disagreement level with high strategic implications. While speakers largely agree on goals, their different approaches to timing, community engagement, and implementation strategies could lead to significantly different outcomes in AI policy development. The disagreements are more about methodology and pace rather than fundamental objectives, but these differences could be crucial for policy effectiveness and adoption success in different regional contexts.



Takeaways

Key takeaways

The OECD AI Principles Implementation Toolkit, initiated by Costa Rica, will provide practical self-assessment tools and region-specific guidance to help countries implement AI principles through co-creation workshops


Responsible AI development must be inclusive, locally-rooted, and community-centered, with marginalized communities serving as co-creators rather than just end-users


Developing countries face significant challenges including infrastructure limitations, connectivity issues (only 22% of Africans have broadband access), data scarcity, and fragmented policy frameworks


AI applications in government services show practical benefits, with Norway demonstrating success in healthcare, tax administration, and police services, while 70% of Norwegian state agencies already use AI


Trustworthy AI governance requires leadership competence, legal frameworks, employee training, and addressing higher risks in government use compared to private sector applications


International cooperation and knowledge sharing through regional workshops and platforms are essential for bridging AI divides and promoting inclusive AI ecosystems


AI implementation is a long journey with moving targets, requiring strategic investment in innovation, inclusion, and infrastructure development


Resolutions and action items

OECD will launch a comprehensive report on governing with AI and create a dedicated hub for AI in the public sector on oecd.ai


Regional co-creation workshops will be organized, starting with ASEAN countries in Thailand, followed by workshops with African, Central American, and South American countries


Norway aims for 80% of public agencies to use AI by 2025 and 100% by 2030, with investments in Norwegian language foundational models and computing infrastructure


Norway will implement the EU AI Act and establish AI Norway as an arena for sharing experience and regulatory sandbox testing


OECD will conduct a global data collection exercise on AI policies and use cases to be presented through the OECD AI Policy Observatory


Unresolved issues

Many AI use cases remain at piloting stage with governments struggling to scale pilots into wider systems or services


Governments need better tools and methodologies to assess the costs and benefits of AI implementation in the public sector


Inadequate data, skills, and infrastructure in the public sector continue to constrain AI adoption


The need for more actionable guidelines and navigation of rigid regulatory environments remains challenging


Capacity building and workforce development cannot keep pace with the rapid advancement of AI technology


Data bias issues persist, with facial recognition systems performing poorly on African populations and only 2% of African-generated data being used locally


Suggested compromises

Taking time to develop context-appropriate solutions rather than rushing implementation without proper understanding of local needs


Balancing ambitious AI adoption goals with the need for proper training, legal frameworks, and safety measures


Using modular and flexible guidance approaches that can adapt to different resource settings and local contexts


Combining international best practices with local innovation and community-led initiatives


Establishing public-private partnerships to share the burden of AI development and implementation costs


Thought provoking comments

If even a country like Costa Rica, politically stable, technically skilled and internationally connected, faces these challenges, then surely other countries like us will too face that challenge.

Speaker

Marlon Avalos


Reason

This comment was particularly insightful because it reframed the AI development challenge from a Global South perspective. Rather than positioning Costa Rica as disadvantaged, Avalos acknowledged their relative strengths while emphasizing that if even well-positioned countries struggle, the challenges are systemic rather than just resource-based. This created a foundation for genuine international collaboration rather than a donor-recipient dynamic.


Impact

This comment established the legitimacy and urgency of the OECD AI Principles Implementation Toolkit initiative. It shifted the discussion from theoretical policy frameworks to practical, experience-based solutions and set the tone for other speakers to share their ground-level challenges and innovations.


Don’t just ask who builds AI, ask whose future is it building? Because in countries like ours, trust is not a given, it’s earned. And when communities are trusted as co-creators, not just end users, they don’t just adopt technology, they transform it.

Speaker

Jibu Elias


Reason

This comment was profoundly thought-provoking because it challenged the fundamental approach to AI development and deployment. It shifted focus from technical capabilities to human agency and democratic participation in technology design. The distinction between ‘end users’ and ‘co-creators’ reframes the entire AI governance conversation around empowerment rather than consumption.


Impact

This comment elevated the entire discussion by introducing a philosophical framework that connected all subsequent speakers’ examples. It provided a lens through which the audience could evaluate all AI initiatives – whether they truly involve communities as co-creators or merely as beneficiaries.


We do say Europeans have watches, we have time. So I’m just saying this to plead for, you know, taking the time to do things, because rushing into doing things that are not geared to the context just keeps us behind more than anything, because people do not understand what it is we’re trying to do or where is it that we’re trying to get to.

Speaker

Anne Rachel Ng


Reason

This culturally grounded metaphor was exceptionally insightful because it challenged the prevailing narrative of ‘catching up’ in AI development. It reframed the perceived disadvantage of slower adoption as potentially advantageous, emphasizing that contextual appropriateness and community understanding are more valuable than speed. This perspective counters the technology determinism often present in AI discussions.


Impact

This comment provided a powerful counter-narrative to the urgency often associated with AI adoption. It influenced the discussion by validating deliberate, community-centered approaches and gave other speakers permission to discuss the importance of local context and inclusive processes over rapid deployment.


We found that some functions face particular barriers or complexities, such as stricter rules on data access and sharing, and stricter requirements for thorough audit trails in public integrity.

Speaker

Seong Ju Park


Reason

This observation was insightful because it revealed that the uneven distribution of AI use cases in government isn’t just about technical capacity or resources, but about institutional and regulatory complexity. It highlighted how governance structures themselves can create barriers to AI adoption, suggesting that policy reform may be as important as technical development.


Impact

This comment shifted the second segment’s focus from success stories to implementation challenges, preparing the ground for more nuanced discussions about the barriers governments face and the need for adaptive governance frameworks.


AI has changed many aspects of our lives, how we communicate, how we seek information. And this is affecting governments as well. This is accelerating digital transformation of public sector, changing how governments work, how government design and deliver policies and services. And it also changed the expectations and needs of the citizens and businesses that they serve.

Speaker

Seong Ju Park


Reason

This comment was thought-provoking because it positioned AI not just as a tool for government efficiency, but as a transformative force that changes the fundamental relationship between governments and citizens. It suggested that AI adoption creates new expectations and needs, implying that governments must evolve not just their tools but their entire approach to public service.


Impact

This framing influenced the entire second segment by establishing that AI in government isn’t just about automation or efficiency gains, but about fundamental transformation of governance relationships. It set up the subsequent discussions about trust, accountability, and citizen engagement.


So it’s moving targets. Then we need agile measures to take care of the AI safety issues… those ones should be narrated clearly in the AI safety and governance in one specific country.

Speaker

Jungwook Kim


Reason

This comment was insightful because it acknowledged the fundamental challenge of governing rapidly evolving technology while emphasizing the need for country-specific approaches. The ‘moving targets’ metaphor captured the dynamic nature of AI governance challenges, while the emphasis on national narratives recognized that governance solutions must be culturally and institutionally grounded.


Impact

This comment reinforced the toolkit approach discussed in the first segment by validating the need for flexible, adaptive governance frameworks rather than one-size-fits-all solutions. It connected the theoretical framework discussions with practical implementation challenges.


Overall assessment

These key comments fundamentally shaped the discussion by challenging conventional narratives about AI development and governance. Rather than focusing solely on technical capabilities or resource gaps, the speakers introduced themes of community agency, cultural context, institutional complexity, and adaptive governance. The comments created a progression from recognizing shared challenges (Avalos) to reimagining development approaches (Jibu, Anne Rachel) to understanding implementation complexities (Park, Kim). This elevated the conversation beyond typical policy discussions to address fundamental questions about power, participation, and the purpose of AI in society. The speakers’ insights collectively argued for a more democratic, contextual, and deliberate approach to AI governance that prioritizes community needs and local contexts over rapid technological adoption.


Follow-up questions

How can we better measure the cost and benefits of AI implementation in the public sector?

Speaker

Katarina de Brisis


Explanation

Many governments struggle to make business cases for scaling up AI efforts due to unknown costs and benefits, making it difficult for policymakers to justify investments


How can we develop better tools and methodologies to assess benefits from AI across various sectors and government levels?

Speaker

Katarina de Brisis


Explanation

While there are documented cases of AI benefits, there’s a need for systematic methodological frameworks to evaluate AI impact across different government functions


How can governments scale AI pilots into wider systems and services?

Speaker

Seong Ju Park


Explanation

Many AI use cases in government remain at piloting stage and struggle to scale up, representing a significant implementation challenge


How can we ensure AI systems work effectively for diverse populations, particularly addressing bias in facial recognition and medical devices for people of different ethnicities?

Speaker

Anne Rachel Ng


Explanation

Current AI systems often perform poorly on African populations due to training on non-representative data, as demonstrated by the oximeter example during COVID-19


How can we develop more actionable guidelines for AI implementation in government?

Speaker

Seong Ju Park


Explanation

There’s a large room for improvement in providing practical, implementable guidance rather than high-level principles


How can we address the infrastructure challenges, particularly for landlocked countries with limited broadband access?

Speaker

Anne Rachel Ng


Explanation

Only 22% of Africans have broadband access, and 16 African countries are landlocked, creating significant connectivity barriers for AI adoption


How can we better coordinate cross-ministerial collaboration for AI policy implementation?

Speaker

Anne Rachel Ng


Explanation

AI implementation requires coordination across multiple government ministries (finance, interior, defense, data protection) but this coordination is often lacking


How can we develop AI governance frameworks that are agile enough to keep pace with rapidly evolving AI technology?

Speaker

Jungwook Kim


Explanation

AI is a moving target requiring real-time and proactive governance measures, but current governance structures may not be agile enough


How can we ensure inclusive AI development that truly involves marginalized communities as co-creators rather than just end users?

Speaker

Jibu Elias


Explanation

Trust in AI systems requires involving communities in the development process, not just as recipients of the technology


How can we address the capacity building challenge when the pace of AI development exceeds the speed at which human capacity can be developed?

Speaker

Anne Rachel Ng


Explanation

With very young populations in developing countries, there’s a mismatch between the speed of AI advancement and the time needed to build adequate workforce capacity


Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

Open Forum #7 Advancing Data Governance Together Across Regions

Session at a glance

Summary

This discussion focused on advancing data governance across regions, bringing together policymakers and civil society leaders from West Africa, Eastern Partnership, and Western Balkans to explore common challenges and share best practices. The session was moderated by Wairagala Wakabi from CIPESA and hosted by Dr. Ismaila Ceesay, Minister of Information from The Gambia, who outlined his country’s comprehensive digital transformation strategy including national data protection policies and alignment with ECOWAS and African Union frameworks.


Commissioner Milan Marinovic from Serbia emphasized the critical balance between digital advancement and personal data protection, proposing the creation of a global e-association of data protection authorities to facilitate international cooperation. He stressed that digitalization and data protection must develop in parallel, comparing their relationship to natural complementary forces. Regional experts highlighted varying approaches across different areas, with Folake Olagunju from ECOWAS describing West Africa’s focus on harmonization without homogenization, emphasizing multi-stakeholder engagement and evidence-based policymaking.


Dr. Olga Kyryliuk from Southeastern Europe described her region as having “high digital ambition” but facing challenges due to regulatory divides between EU member states and non-EU countries seeking accession. Civil society representatives from Armenia and Kyrgyzstan shared their experiences with digital transformation, emphasizing the importance of civic tech voices in building trust and ensuring inclusive governance. A recurring theme throughout the discussion was the need for harmonization of legal frameworks while respecting national sovereignty and cultural differences.


The panelists identified several practical next steps for strengthening inter-regional cooperation, including establishing continental data governance frameworks, creating controlled test environments for interoperable platforms, and developing formal cooperation channels between data protection agencies. The discussion concluded with emphasis on the critical importance of building cross-border trust, ensuring transparent oversight, and balancing multiple human rights including privacy and freedom of information in the evolving digital landscape.


Keypoints

## Major Discussion Points:


– **National Data Governance Framework Development**: Countries across different regions (The Gambia, Serbia, Armenia, Kyrgyzstan) are actively developing comprehensive national data protection and governance frameworks, with many aligning their legislation to GDPR standards and regional frameworks like ECOWAS and African Union policies.


– **Regional Harmonization vs. National Sovereignty**: A central tension emerged around balancing the need for harmonized cross-border data governance standards while preserving national data sovereignty, with speakers emphasizing “harmonization not homogenization” and the importance of mutual recognition frameworks.


– **Cross-Border Data Protection Authority Cooperation**: Significant focus on strengthening cooperation between Data Protection Authorities (DPAs) globally, including proposals for new international associations and formal cooperation channels for audits, incident response, and enforcement coordination.


– **Multi-Stakeholder Engagement and Civil Society Role**: Strong emphasis on the critical importance of involving civil society, private sector, academia, and citizens in data governance processes, with civic tech organizations serving as essential bridges between governments and citizens to ensure transparency and accountability.


– **Balancing Human Rights in Data Governance**: Discussion of the complex challenge of protecting privacy rights while preserving freedom of information and expression, with several countries adopting integrated approaches that combine data protection and access to information oversight under unified commissions.


## Overall Purpose:


The discussion aimed to foster inter-regional dialogue on data governance best practices, challenges, and cooperation mechanisms between policymakers and civil society leaders from West Africa, Eastern Partnership, Western Balkans, and other regions. The session sought to identify common approaches for building digital cooperation, sharing lessons learned, and developing actionable steps for strengthening international collaboration on data governance standards and frameworks.


## Overall Tone:


The discussion maintained a consistently collaborative and constructive tone throughout. Speakers demonstrated mutual respect and genuine interest in learning from each other’s experiences. The tone was professional yet accessible, with participants openly sharing both successes and challenges. There was a notable spirit of cooperation, with multiple speakers building upon each other’s ideas and offering concrete proposals for future collaboration. The atmosphere became increasingly solution-oriented as the session progressed, culminating in specific actionable recommendations and offers for continued partnership between regions and organizations.


Speakers

**Speakers from the provided list:**


– **Wairagala Wakabi** – Executive Director of CIPESA (Collaboration on International ICT Policy for Eastern and Southern Africa), Session Moderator


– **Dr. Ismaila Ceesay** – Minister of Information from The Gambia


– **Milan Marinovic** – Commissioner for Access to Public Information of Importance and Personal Data Protection of Serbia (appointed in 2019), former judge


– **Olga Kyryliuk** – Chair of the South Eastern European IGF, expert in digital governance, Internet freedom and international law


– **Meri Sheroyan** – Co-founder of Digital Armenia NGO, IT expert specializing in digital transformation in the public sector


– **Tattugal Mambetalieva** – Director of Civil Initiative on Internet Policy (Kyrgyzstan), initiator and founder of Kyrgyz Forum on Information Technology and Central Asian Forum on Internet Governance


– **Folake Olagunju** – Acting Director of Digital Economy at the Economic Community of West African States (ECOWAS) Commission (participated online)


– **Audience** – Multiple audience members who asked questions during the session


**Additional speakers:**


None identified beyond those in the provided speakers names list.


Full session report

# Inter-Regional Data Governance Dialogue: Sharing Experiences and Building Cooperation


## Session Overview


This inter-regional dialogue brought together policymakers and civil society representatives from West Africa, Eastern Partnership, Western Balkans, and Central Asia to discuss data governance challenges and share regional experiences. The session was moderated by Wairagala Wakabi from CIPESA and hosted by Dr. Ismaila Ceesay, Minister of Information and Communication Infrastructure from The Gambia.


## Opening Remarks


Dr. Ismaila Ceesay welcomed participants and outlined The Gambia’s digital transformation priorities, emphasizing a whole-of-government approach to data governance. He highlighted three key areas: institutional capacity building, legal reforms including the Data Protection Bill 2023 currently in parliament, and statistical system reform. Dr. Ceesay acknowledged significant challenges including capacity gaps, institutional fragmentation, and digital divide issues affecting rural populations.


Moderator Wairagala Wakabi structured the discussion around key questions about regional approaches to data governance, cross-border cooperation mechanisms, and practical steps for advancing inter-regional collaboration.


## National and Regional Perspectives


### ECOWAS Regional Framework


Folake Olagunju from ECOWAS described West Africa’s approach as “harmonisation not homogenisation,” explaining that ECOWAS revised its Supplementary Act on Personal Data Protection to support cross-border data flows while respecting individual country contexts. She emphasized a “whole-of-society” methodology involving government, civil society, private sector, academia, and citizens in policy development processes.


### Serbia’s Institutional Model


Commissioner Milan Marinovic from Serbia’s Commissioner for Information of Public Importance and Personal Data Protection described data protection as “one of the most threatened fundamental human rights in today’s era of rapid development of modern technologies.” He proposed creating a global E-association of Data Protection Authorities (DPAs) and highlighted Serbia’s “two-in-one system” that combines data protection and access to information oversight under a unified commission.


### Armenia’s Civil Society Perspective


Meri Sheroyan from Digital Armenia emphasized the role of civic tech organizations as bridges between governments and citizens. She described Armenia’s efforts to build comprehensive legal and technical frameworks for digital transformation, including e-governance platforms and data governance projects, while highlighting the importance of civil society in building public trust.


### Kyrgyzstan’s Distinctive Approach


Tattugal Mambetalieva from Kyrgyzstan explained that her country deliberately avoids data centralization and localization, stating that “centralization of data has risks for data protection and localization of data creates additional burden to business.” This approach differs from neighboring countries like Kazakhstan and Uzbekistan, demonstrating diverse policy choices within the region.


### Southeastern European Coordination


Dr. Olga Kyryliuk, Chair of the South Eastern European IGF, described her region as having “high digital ambition” but facing challenges due to regulatory differences between EU member states operating under GDPR and non-EU countries still seeking compliance. She emphasized the role of Internet Governance Forums in facilitating dialogue across different regulatory environments.


## Key Themes and Common Challenges


### Harmonization While Respecting Sovereignty


Multiple speakers emphasized the importance of regional cooperation that creates interoperability without imposing identical solutions. The ECOWAS model of harmonization rather than homogenization was cited as an example of balancing common standards with national sovereignty.


### Multi-Stakeholder Engagement


All participants stressed the importance of inclusive stakeholder engagement, though with different emphases. While The Gambia focused on whole-of-government approaches, ECOWAS emphasized whole-of-society participation, and Armenia highlighted the critical role of civil society organizations.


### Capacity Building Needs


Speakers from all regions identified capacity building and institutional strengthening as persistent challenges requiring sustained attention and resources.


### Balancing Rights and Innovation


Participants discussed the need to balance data protection with other rights including freedom of information and expression, as well as supporting digital innovation and economic development.


## Audience Engagement


The session included questions from the audience, including an inquiry about the SOLID protocol and linguistic AI in the context of indigenous language preservation. Dr. Ceesay acknowledged the complexity of language preservation in Africa, noting the continent has over 2,000 languages with some countries having 56 to 200 languages each.


## Action Items and Commitments


In their final one-minute responses, panelists made specific commitments:


– **Commissioner Marinovic** committed to contacting DPAs worldwide within the week to propose his E-association concept


– **Dr. Ceesay** committed to finalizing The Gambia’s data protection legislation by year-end and implementing the planned merger of access to information and data protection oversight functions


– **Folake Olagunju** outlined ECOWAS plans to establish controlled test environments for member states to trial interoperable platforms in sectors such as health, education, and identity systems


– **Dr. Kyryliuk** offered to host a side meeting during CDIG’s October meeting in Athens to advance inter-regional dialogue


– **Meri Sheroyan** emphasized continuing to pilot small-scale cross-border data-sharing initiatives in specific sectors


– **Tattugal Mambetalieva** highlighted the need for intergovernmental agreements on data exchange in Central Asia


## Key Takeaways


The dialogue demonstrated both shared challenges and diverse approaches to data governance across regions. While all participants agreed on fundamental principles such as the importance of multi-stakeholder engagement and the need to balance various rights and interests, their implementation strategies reflect different regional contexts and priorities.


The session highlighted the value of inter-regional dialogue for sharing experiences and identifying potential areas for cooperation, while respecting the diversity of approaches needed to address local contexts and constraints. The concrete commitments made by participants suggest potential for continued collaboration and mutual learning across regions.


The discussion reinforced that effective data governance requires not only technical and legal frameworks but also sustained institutional capacity building, inclusive stakeholder engagement, and mechanisms for regional cooperation that respect national sovereignty while enabling cross-border collaboration.


Session transcript

Wairagala Wakabi: Hello, good afternoon, dear audience, it is my pleasure to moderate this session today, and I’ll begin by introducing myself. My name is Wakabi and I am the Executive Director of CIPESA, which is the Collaboration on International ICT Policy for Eastern and Southern Africa, a think tank that works on issues at the intersection of technology, human rights, governance, and livelihoods. Today, we are bringing together notable speakers from across various regions to discuss data governance in line with the IGF sub-theme of building digital cooperation. The session aims to contribute to inter-regional dialogue among policymakers and civil society leaders from West Africa, from the Eastern Partnership, and the Western Balkans to leverage common knowledge. I am from East Africa myself, which wasn’t mentioned among those regions, so some insights will also come from there. On this note, to kick us off, I would like to invite our host, who is the Minister of Information from The Gambia, to share his welcome remarks. Dr. Ismaila Ceesay, please take the floor.


Dr. Ismaila Ceesay: Thank you very much, Dr. Wakabi, thank you for that introduction. Excellencies, distinguished delegates, ladies and gentlemen, it is a great honor to join you today for this very important discussion on advancing data governance across regions as we collectively seek pathways forward. In an increasingly digital world, data is a critical enabler of development, innovation and rights. For The Gambia, harnessing data responsibly is key to driving economic growth, improving service delivery, and protecting the dignity and rights of our people. The Gambia is embracing the digital age with ambition and purpose. We recognize that digital transformation is not just a matter of technological advancement. For us, it is a catalyst for inclusive growth, innovation and good governance. Our national broadband policy, our digital ID initiatives and e-government platforms are all part of a comprehensive strategy to bridge the digital divide, empower citizens and modernize our economy. We have made significant strides in putting data governance at the core of our digital development agenda. We are currently implementing our national data protection and privacy policy, grounded in principles of accountability, transparency and human rights. Steps are also underway to establish an independent data protection authority, which will oversee the enforcement of data governance principles and build trust with citizens, businesses and regional partners. We are also committed to finishing the development of the Gambian national data governance policy, supported by the African Union and the European Union. The Gambia is actively engaged in regional frameworks of ECOWAS and the African Union, including alignment with the EU data policy framework. We recognize interoperability, regulatory harmonization and mutual trust are essential for effective cross-border data flows in Africa and beyond. 
We believe that effective cross-border data governance can unlock tremendous value, facilitating trade, strengthening regional integration and enabling secure data flows across borders. We believe that International Cooperation must be fair, inclusive and development-oriented. We are fully aware that no country can do this alone. Advancing data governance across borders requires trust, coordination and shared values. As countries in the Global South, we seek equitable participation in shaping global digital rules, and we emphasize the need for capacity support, infrastructure investments and data governance models that reflect our local realities. The Gambia stands ready to work with partners on the continent and globally to build a data governance ecosystem that is secure, rights-respecting and fit for the digital age. Let us advance together, bridging borders and building trust in the digital world. Thank you.


Wairagala Wakabi: Thank you, sir, for outlining the Gambia’s efforts in its digital development agenda and also outlining its commitment to cooperative data governance. As Dr. Ceesay has touched on, the capacity to govern data effectively, both domestically and across borders, is crucial, and so to explore common challenges and valuable experiences from different regions, we are going to hear from our panelists and dive deeper into the varying contexts that can enable us to accelerate responsible, future-ready and rights-based data governance globally. I will therefore introduce our panelists today, beginning next to Dr. Ceesay, Milan Marinovic, who was appointed Commissioner for Access to Public Information of Importance and Personal Data Protection of Serbia in 2019. Previously, Mr. Marinovic served as a judge in different courts. He has authored various publications and participated in various working groups, drafting and amending legislation in Serbia. Next to him, we have Dr. Olga Kyryliuk, who currently serves as chair of the South Eastern European IGF, leading multi-stakeholder cooperation across 18 countries in the region. She’s internationally recognized as an expert in digital governance, Internet freedom and international law with over 12 years of experience at the intersection of technology, policy and human rights. And to my left, we have Meri Sheroyan, the co-founder of Digital Armenia, an NGO focused on advancing digital transformation through inclusive, user-centered approaches. As an IT expert, she specializes in digital transformation in the public sector and public administration systems. And she has extensive experience working within government institutions as well as with development institutions. To the extreme left, we have Tattugal Mambetalieva, the Director of Civil Initiative on Internet Policy based in Kyrgyzstan. 
She’s also the initiator and founder of the public platform Kyrgyz Forum on Information Technology and of the annual Central Asian Forum on Internet Governance, which is a regional initiative of the Global Internet Governance Forum created under the auspices of the UN. We also have a participant online who has not been able to join us in person, and that is Folake Olagunju, the Acting Director of Digital Economy and Post at the Economic Community of West African States (ECOWAS) Commission, where she leads the Digitalization Directorate. We will now hear from our panelists and set the stage and get a sense of the state of data governance in The Gambia and Serbia. We’ll start with Dr. Ceesay first. As The Gambia continues to develop its digital infrastructure and data policies, what are the country’s priorities and challenges in developing and implementing effective national data governance frameworks, and how do they align with the broader strategies of the African Union and ECOWAS?


Dr. Ismaila Ceesay: Thank you very much once again, Mr. Moderator. As for our priorities, our number one priority is institutional capacity building. Now, The Gambia is advancing the development of a comprehensive National Data Governance Framework to support digital government, evidence-based policymaking, and public service delivery. This initiative is supported by UNDESA and includes a series of stakeholder consultations and capacity-building workshops led by the Ministry of Communication and Digital Economy of The Gambia. Our other priorities also focus on legal and regulatory reforms. For example, we have the data protection and privacy legislation, which is currently in parliament. This is building on the National Data Protection and Privacy Policy of 2019. The Gambia has formulated the Data Protection and Privacy Bill 2023, which is currently before the National Assembly. The bill provides a robust legal framework covering data subject rights, controller and processor responsibilities, transborder data flows, processing principles, safeguards, enforcement mechanisms, and sanctions. Under these reforms, we also have the statistical system reform. This is under the National Strategy for the Development of Statistics. The 2025 Statistics Act is being revised to strengthen coordination across the national statistical system. This reform aligns with the National Development Plan 2023-2027, Agenda 2063, the ECOWAS Regional Statistical Strategy, and the UN SDGs. We also have the national data policy reforms, with support from GIZ and UNDESA. The national data policy has been validated and is pending cabinet submission. It aims to harmonize data governance across sectors and establish a foundation for secure, inclusive, and rights-based data use. Another priority is the whole-of-government approach. 
The MOCDE, which is the Ministry Responsible for Digital Economy, is spearheading cross-sectoral coordination to ensure that data governance is embedded across ministries, departments, and agencies. Once adopted, the policy will address data protection, cyber security, open data, and access to information, while balancing freedom of expression with the mitigation of online harms. The National Data Policy is a cornerstone of the Gambia’s broader digital transformation agenda, aligning with the Digital Transformation Strategy 2024-2028, Digital Economy Master Plan 2024-2034, and Government Open Data Strategy 2024-2027. It supports the NDP, SDGs, and Agenda 2063 by promoting data availability, accessibility, and interoperability to drive innovation, transparency, and inclusive development. As for our challenges, particularly the persistent ones: one is capacity gaps. Many ministries, departments, and agencies lack the technical and analytical capabilities to manage and utilize data effectively. A second challenge is fragmentation. The national data ecosystem remains siloed with inconsistent standards for data collection, storage, and sharing. Another challenge we are facing is the digital divide, inequities in digital access and literacy, particularly across rural and underserved populations. This limits inclusive participation in data-driven governance. And finally, on our alignment with AU and ECOWAS strategies: The Gambia’s data governance reforms are closely aligned with the African Union’s data policy framework, which emphasizes data sovereignty, cross-border data flows, and inclusive digital economies. At the regional level, the Gambia is also actively engaged in the ECOWAS Supplementary Act on Personal Data Protection, which is expected to be endorsed by heads of state in the upcoming summit. These efforts underscore the Gambia’s commitment to regional harmonization and digital trust.


Wairagala Wakabi: Thank you very much. That’s a handful of measures that have been implemented to advance data governance, in spite of the challenges, and it would be good here if the challenges are also shared across regions. But I have a follow-up question. The Gambia also recently launched a five-year strategic plan to strengthen good governance. Its pillars include improving transparency and access to information, boosting public participation, and strengthening institutional capacity and good governance. Could you please describe to us the role of the Ministry of Information that you lead in building public trust around data governance?


Dr. Ismaila Ceesay: While the Ministry of Digital Economy leads on technical and regulatory aspects, the Ministry of Information, which I lead, plays a critical role in fostering public trust and civic engagement. One of the things we do, and which is our mandate, is public awareness and digital literacy activities. The Ministry is responsible for sensitizing citizens on their data rights, the value of open data, and the safeguards in place to protect personal information. This includes campaigns to demystify data governance and promote responsible digital citizenship. Our initiatives and activities also focus on transparency and access to information. As a key pillar of the 2025-2029 strategic plan, the Ministry is expected to champion proactive disclosure of government-held data, thereby reinforcing transparency and accountability in public institutions. We also undertake media engagement and narrative framing. By collaborating with public and private media, the Ministry also shapes inclusive narratives that build confidence in digital reforms, counter misinformation and disinformation, and promote calm and stability during periods of digital transition. And finally, we also engage in stakeholder dialogue and inclusion. The Ministry serves as a bridge between government, civil society, and the public, facilitating participatory dialogue to ensure that data governance policies reflect citizen concerns and uphold democratic values.


Wairagala Wakabi: Thank you very much. We’ll hear now from Commissioner Marinovic of Serbia, which has equally made significant progress in developing a rights-based data governance framework with a particular emphasis on the protection of personal data. Commissioner, what have been the recent institutional challenges of balancing compatibility between digital and data systems with the protection of fundamental rights?


Milan Marinovic: Thank you, Mr. Wakabi. Dear all, greetings from Serbia to everyone. First of all, I want to thank GIZ for the invitation to participate in such an important event. Also, with GIZ support, we plan to raise the capacities of policymakers and IT experts in the field of data privacy in Serbia. At the very beginning, let me share with you one of my experiences. Every time I find myself at such a large and important event dedicated to digitalization and the use of modern technologies, I, as someone who deals with the protection of personal data, feel like a cat at a dog’s exhibition. It is an extraordinary pleasure and honor, but also a responsibility to be with you today at this fantastic forum. Protection of personal data, as well as the right to privacy in general, is one of the most threatened fundamental human rights in today’s era of rapid development of modern technologies, widespread digitalization and enormous use of artificial intelligence. That is why it is extremely difficult to find the appropriate balance between digital and data systems and the protection of personal data. Difficult, but not impossible. What is most important in creating that balance? Parallel, balanced development of both sides of the same story. This means that the accelerated development of digitalization in all areas of life must be accompanied by the development of personal data protection systems. Digitalization in general, and artificial intelligence in particular, cannot exist without data processing, especially personal data. They feed and depend on data. The processing of data is certainly necessary and useful, and it will be more and more in the future. But as the processing of personal data grows, so must the protection of this data grow. Just as a day cannot exist without night, summer without winter, so the processing of personal data cannot exist without its protection. 
There is a strong link between the processing and protection of personal data. This implies many things, of which I will mention only those which, in my opinion, are the most important. First, strengthening the system and the measures for the protection of personal data. Second, strengthening data protection authorities around the world. Third, strengthening cooperation and collaboration between data protection authorities from all over the world. Fourth, establishing and strengthening the communication and cooperation of the regulatory bodies with the most important controllers and processors of personal data, such as big tech companies and social networks. And fifth, last but not the least, raising the level of awareness of citizens about the importance of personal data protection.


Wairagala Wakabi: Thank you so much, Commissioner. I think all DPAs and many of us are always grappling with best ways in which we can be able to balance those two elements. And you’ve said parallel balanced development of… both is the key. But you also mentioned the issue of a deeper cooperation between DPAs in different countries. In your role, where you sit, what kind of cross-border and inter-regional cooperation is happening between different data protection authorities?


Milan Marinovic: Speaking of cross-border and inter-regional cooperation between data protection authorities, I would like to take this unique opportunity to introduce to you an initiative that I promoted this spring at the Privacy Symposium in Venice. My idea is to form an association of DPAs from all over the world, at a global level and in an online format. I call this future association the E-association of DPAs, and my idea is that all regulators, regardless of their status in the country they are from, have the opportunity to exchange practices in the field of personal data protection, to exchange their experience, provide mutual legal assistance and solve common problems in a simple, easy and efficient way at the bilateral and multilateral level. As a first step in the realization of this idea, I plan next week to send all DPAs in the world an email in which I will explain the idea of creating the association and ask them whether they support this idea and whether they would like to be members of the future association. The activities we then undertake will depend on their answers.


Wairagala Wakabi: Thank you very much. Great initiative. We hope you will also be partnering and associating with other actors, academia, civil society, etc., and they will not feel like cats at a dogs’ exhibition. No, I hope so. So thank you, our distinguished speakers, for those valuable insights into national approaches to foster regulated and inclusive data governance, with many lessons learned and a couple of common challenges. We would now like to invite our regional experts to contribute to this discussion by bringing their experience from West Africa and Southeastern Europe. We are going to begin with Folake Olagunju, who is online but was introduced. In the region, Folake, there is a lack of reliable data, and this can hamper the evidence-based policymaking that is necessary for well-founded decision making. How is the Economic Community of West African States contributing to norm-setting and coordination among its members to facilitate cross-border data flows, and what lessons can be shared with other regional blocs that are willing to follow suit?


Folake Olagunju: Thank you very much, Wakabi, for giving me the floor, and I must apologize for the noise. I’m at a conference center so it’s a bit hectic here. Very valid point. We do know that data is something we all struggle with. It’s not just a West African issue. But for us at the ECOWAS Commission, we’re looking to ensure that all the policy making we actually do is anchored in an evidence-based approach. How do we do this? We try and prioritize the data that we get and ensure that there’s inclusive engagement. We always ensure our Member States are right with us from the very beginning all the way to the end. It was interesting that the Minister from The Gambia spoke about the Supplementary Act on Data Protection within West Africa. That is something we’ve just revised and we’re trying to ensure that it is adopted. That process actually went through from Member States all the way through to the Council of Ministers. But before we did that, we actually made sure we do studies with different stakeholder groups across West Africa. So you’ve got your civil society, you’ve got your private sector, every voice matters. Because when you talk about data, it involves every single person. So it’s not just about a whole of government. I understand why The Gambia is doing a whole of government, but for us at the regional perspective, we’re looking at a whole of society because this is absolutely vital. Now one of the things we’ve done with the revision of the Supplementary Act for the Data Protection within ECOWAS is to look at how we can support cross-border data flow. And this is inter-, intra- and across-borders because this is very, very important. It’s about harmonisation at the regional level, but not homogenisation. So yes, we need to harmonise because we’re a regional bloc, we have similarities, but it should not be homogeneous, so that it’s tailored to the different nuances of each member country. 
Stakeholder consultation remains absolutely key, and it’s at the cornerstone of everything that we do at the ECOWAS Commission. We need to ensure that whatever we do is data-driven, and decisions need to have inclusive research, we need to ensure we’ve got academia, we need to ensure civil society for accountability, we need to ensure private sector because they bring the money to the table. We need governments because they are the ones that would actually operationalise whatever it is we do at the regional level. We’re also trying to ensure that what we do aligns with the continental frameworks that we have. The Minister spoke about not just the Malabo Convention, but he also spoke about the ADPF. We look at continental frameworks as well. We’re not working in silos. We ensure that what we do is actually of value to our member states, but also puts them in the right position to be able to actually interact with other regions, like you’ve rightly said, COMESA, SADC, and globally across. We’re looking to align all our standards as well, because this is absolutely very important. So that’s what we’re doing at the moment in terms of harmonization, ensuring that we have evidence, frameworks that are backed up with evidence. Like you rightly said, again, data, not easy to find, but I think if you’re able to actually include, what’s the word I’m looking for, a plethora of people in the process, you will actually see that at the end of the day, you get that buy-in, and hopefully operationalization becomes a dot. Thank you.


Wairagala Wakabi: Thank you very much. And as a follow-up, how does ECOWAS support the creation of favorable conditions for data governance in the region, and what stakeholders does it take to effectively implement the strategies?


Folake Olagunju: That’s an interesting question. So one of the things we’re looking to do at the moment is actually have a regional instrument in place that will talk about open data. Now, why do we need open data? We’re trying to ensure that all the frameworks that we put in place at the regional level will do three things. Encourage transparency, promote interoperability, because that is absolutely key, and last but not least, but I think the most vital, is responsible data sharing. So data is only as good as who has it and who is willing to share it and how it’s used. So we’re doing that at the regional level. We’re also looking at certain data priorities in the digital sector development strategy that we’ve got, and this is over five years. What we’re trying to do is to ensure that we can define sensitive and non-sensitive data categories for our member countries. What we find is when you ask someone to share data, they’re a bit reluctant because they don’t know which one needs to be, which data needs to be sovereign and which data can be shared. And I think if we’re able to actually elaborate a little bit more on this, this will actually help. Also, we’re looking at technical and infrastructural standards. I know the Honorable Minister from The Gambia mentioned connectivity. That is something we’re also looking at because without connectivity, how do you even begin to share data or even have the conversations that would allow you to, you know, get data and use data? We’re looking at how we can help member countries transform from a more, I don’t want to say analog government to a more interactive government. So we’re looking at quite a number of member countries have static information portals. So we’re trying to see how we can actually elevate those portals so that they become more interactive for their member countries. And it will actually bring more data and it will actually encourage innovation. Because if you’ve got data, you can also innovate. 
Like I said earlier on, it has to be multi-stakeholder collaboration, like the IGF. We need the private sector; they are the big guns who will help us build our data-driven solutions. We need governments and ICT regulators to adapt and adopt the regulations we’re putting in place and ensure they’re domesticated at the national level. We need academia; they’re the ones who will tell us what we need to be looking at two, three years from now. Last but not least, we need our partners. We can’t do it without them. It’s not always about reinventing the wheel: you can take what has been done in a different region, bring it here, and tailor it to the nuances of West Africa. And then I want to say we definitely cannot do it without the citizens. If the citizens don’t use data, or don’t understand the need for data, none of these efforts will take root.


Wairagala Wakabi: Thank you so much, Folake. Much appreciated. That’s what’s happening in Western Africa. So let’s move on and hear from Southeastern Europe. Olga, that region navigates between national data ecosystems and broader regional dynamics. How would you describe the current state of data governance in the region? What are the most prominent dynamics within the region?


Olga Kyryliuk: Thank you for the question. When talking about my region, I like to describe Southeastern Europe as a region with high digital ambition. What makes the region truly unique is that it remains divided between countries operating under the EU regulatory framework, such as the GDPR (for example, Croatia), and countries still in the process of securing full institutional and legal compliance, such as North Macedonia. This regulatory divide has real consequences, especially when it comes to cross-border trust and data sharing. While EU member states benefit from structured oversight and shared enforcement mechanisms, the neighboring non-EU countries, even those whose laws quite closely mirror EU standards, often still face a challenge because they are still considered third countries in terms of data protection guarantees and safeguards. This status itself introduces friction into data flows, especially in public health, education, and digital services, where cooperation is supposed to be seamless and smooth. As you can see, the region is caught between fragmentation and convergence. Fragmentation still defines the legal space, the institutional capacity, and the technical infrastructure. But there is also a growing convergence of ambition. Almost all countries in the region either have EU accession ambitions or are trying to integrate into global digital markets. This is why they take the example of the European Union and try to standardize and harmonize their laws and enforcement practices in the sphere of data protection and data governance with the European Union model. This moment presents both a challenge and an opportunity for our region. The challenge usually comes down to bridging the digital-legal divide, which stalls cooperation; it is very important to ensure that the legal frameworks really talk to each other and that there are no major discrepancies.
But there is also the opportunity which lies in building shared regional trust frameworks which go beyond the simple compliance mechanisms. I think so far our region is doing quite a good job in trying to adopt the legal frameworks which are according to the best safeguarding practices in terms of data governance and data protection. There is of course quite a long way to go for some countries compared to others because, as I said, the region is not uniform but this is also what is making the region unique and an interesting example for sharing the practices and the case studies with other regions in the world.


Wairagala Wakabi: Thank you very much. I hear a couple of similarities between your regions, Southeastern Europe and Western Africa: issues around harmonization and compliance mechanisms, and issues around interoperability. We are at the IGF, so we cannot not ask about the role of the IGF. Where you sit, you have the regional IGF, SEEDIG. How is it contributing to harmonizing data governance frameworks? Have there been any successful models from the region that could serve as a template for others?


Olga Kyryliuk: I believe that IGFs, and SEEDIG in particular, have a crucial role to play in this whole process. First of all, we contribute by identifying shared priorities across the region. We connect in-country stakeholders from across the region, bring them into the same room, and facilitate dialogue between them. As a next step, we also help to improve trust between counterparts from neighboring countries and help them coordinate with each other beyond the borders of their nation states. Of course, SEEDIG, like any IGF initiative, is not a space that can create laws, but we are definitely a space that can create the opportunity for better laws and better cooperation to be shaped, and where new initiatives with practical value can begin. I would also say that for fragmented regions like ours, the very fact of creating a habit of cooperation is an important first step toward trusted cooperation over the years, and I think this is what initiatives like SEEDIG are doing. Also, as I mentioned, there is an emerging practice in our region of shaping convergence between different countries, and it is important to have this culture of different stakeholders talking to each other. During the SEEDIG meetings, which happen on an annual basis, we repeatedly have sessions touching on the issues of data governance and data protection from different perspectives, and we usually get a lot of proposals on these specific topics, which means this is something that resonates with stakeholders in the region and is truly important to them.
And for the upcoming meeting this year in October, which we will be hosting in Athens, we have been partnering with the Council of Europe and will be hosting a pre-event to the main meeting, gathering representatives of the media regulatory authorities from the Western Balkans. This is a good way to start a more trusted conversation where they feel comfortable sharing the challenges they experience on a daily basis; then they will join the main meeting and talk to other stakeholders, and there will also be a panel hosted so that this can truly shift to a multi-stakeholder conversation. So I would say a space like the IGF is probably not the solution for everything, but it is obviously a good place where good initiatives can start.


Wairagala Wakabi: Thank you for sharing these inputs, very insightful on regional challenges from Western Africa and from Southeastern Europe. The examples illustrate the importance of the work that regional organizations are doing in facilitating data governance among states. We have looked at national and regional perspectives on data governance and would now like to bring civil society into the conversation. We will begin with Tattugal Mambetalieva. The Digital Code recently adopted in Kyrgyzstan aims to create a favorable environment for digital services and data processing. From your perspective, how has the national approach to data governance evolved over recent years, and what opportunities and challenges does civil society have when engaging in data policy and implementation processes?


Tattugal Mambetalieva: Thank you. At the regional level, Kyrgyzstan is the first to use an integration gateway for secure and transparent data exchange between state bodies and business. This innovative approach is part of Kyrgyzstan’s recent Digital Code, which sets standards for data handling, focusing on legality, minimization of data collection, accuracy, and integrity to build a better digital environment. Kyrgyzstan does not use centralization or localization of data: centralization of data carries risks for data protection, and localization of data creates an additional burden for business. This approach differs from many neighboring countries, like Kazakhstan and Uzbekistan, where centralization and localization of data are used. Nevertheless, challenges for civil society, and risks around data protection and ethical use, still remain.


Wairagala Wakabi: Thank you for that. As a follow-up, what opportunities do you see for civil society to bridge regional and global data governance efforts? Thank you.


Tattugal Mambetalieva: Central Asian countries are economically interdependent, making data exchange crucial for interaction. However, cross-border data exchange raises concerns about ensuring adequate data security. Civil society must primarily monitor the arrangement of data exchange to ensure countries guarantee transparency, accountability, and inclusivity.


Wairagala Wakabi: Thank you so much. I will now move to Meri quickly. Armenia is navigating digital transformation. Coming from the non-government sector, why is it important to bring civic tech voices into public processes and what role are they playing today in advancing robust data frameworks?


Meri Sheroyan: Thank you very much for the question. You are completely right: Armenia is moving toward digital transformation and has made notable progress in recent years by launching e-governance platforms, digitizing public services, and initiating important data governance projects. Currently, the country is working on building both the legal and technical frameworks needed to support these transformations. These frameworks aim to define how public information is accessed, to set standards for data collection and processing, and to regulate the use and management of databases. But from my perspective, these efforts depend not only on technological advancements, standards, rules, or protocols, but also on inclusive and participatory governance. That’s why I think bringing civic tech voices into public policy processes is essential. For Armenia to build trust in public institutions, it needs the insights and oversight of actors that serve as a bridge between citizens and public institutions. Civic tech organizations, such as non-profits, watchdog groups, data advocates, and digital rights defenders, play a crucial role in this process. Our involvement includes not only monitoring digital projects, but also flagging ethical concerns, identifying data misuse, and addressing barriers to the access of data. And in areas like procurement, budget transparency, and the beneficial ownership platforms that Armenia has, these transparency tools have shown the greatest impact when they are complemented by the engagement and oversight of the public.
Speaking from our own organization’s perspective and experience: we are not just doing monitoring. We go beyond simply evaluating impact; we do outreach projects and education for citizens so they can understand how their data is used, why digital systems matter, and how government platforms can improve public services for everyone. So in short, civic tech voices are not just contributors but essential partners in building digital systems that are ethical, inclusive, and serve the public.


Wairagala Wakabi: Thank you very much. Could we briefly also maybe look at some of the capacity gaps that organizations you work with face in leveraging data for sector initiatives?


Meri Sheroyan: As someone who worked many years in the public sector, then in an international organization, and now in civil society, I perhaps see the issues more clearly, and I can state one issue that matters most: the lack of a clear data strategy is, I think, the main challenge. Without a unified vision or a roadmap for how data supports their missions, efforts in public institutions become fragmented. Weak data governance often results in unclear ownership and inconsistent data quality controls. As we are running out of time, I’ll keep my answer to this question short.


Wairagala Wakabi: We have time. No worries. So thanks everybody. A lot of insights. Before we go to the public to give us some comments and questions, we would like for each participant to use just one minute to give something actionable. Considering the many common challenges that we’ve discussed, what practical steps can your regions take in the next 12 months to strengthen inter-regional, international cooperation on data governance, especially around areas like standard setting, data interoperability and oversight mechanisms? We’ll take this, I think, the same way we went, beginning with the Minister and then the Commissioner and then Olga.


Dr. Ismaila Ceesay: Well, thank you very much. I think one of the practical steps we can consider is to establish a continental data governance framework, so that we can finalize and promote adoption of the AU Data Policy Framework across all member states. This will create a shared baseline for data protection and cross-border data flows, but also interoperability across the continent. Another thing we can consider is to harmonize national data protection laws across the continent, encouraging countries to align with continental standards like the Malabo Convention, but also internationally with GDPR-style protections. This will reduce fragmentation, but also promote easier cross-border collaboration and trust in African data systems. Another thing to consider is to engage in global standard-setting bodies to increase African representation in ISO, IEEE, and UN bodies, for example the ITU. This will ensure Africa’s interests and realities are reflected in global data standards and regulatory frameworks. And perhaps we can also consider building regional oversight and coordination mechanisms, creating or empowering sub-regional data governance hubs. This will help us oversee policy compliance, technical cooperation, and joint investigations into cross-border data breaches, but also encourage shared accountability and mutual learning. Thank you.


Wairagala Wakabi: Thank you, Minister. Commissioner?


Milan Marinovic: Thank you. In the next 12 months, in order to strengthen regional and international cooperation in the field of data governance in our region of the Western Balkans, we plan to hold multilateral and bilateral meetings with DPAs from the region and with relevant representatives of executive authorities, IT companies, and other companies. As a good example of those multilateral meetings, there has been an initiative since 2017, started by Slovenia, which gathers all DPAs from the former Yugoslavia. It is a very interesting combination, because we have two EU member states, Slovenia and Croatia, and four which are not EU members: Bosnia and Herzegovina, Montenegro, North Macedonia, and Serbia. Of these four, Serbia and, recently, Bosnia and Herzegovina have personal data protection laws aligned with the GDPR and the EU Police Directive; Montenegro and North Macedonia do not yet. So that is one particular meeting. The second is a meeting of the data protection authorities of Bosnia and Herzegovina, Montenegro, and Serbia, on our initiative, on how to solve the problem we have with Meta and X regarding changes to how they handle private data.


Olga Kyryliuk: I think my job is now much easier, responding to this question after the Commissioner, because I don’t actually need to reinvent the wheel. I would align with the idea of having an inter-regional dialogue on cross-border data sharing between the data protection authorities. What I can offer from my side, since we are going to host our annual meeting in October, which is still some time away, is to have a side meeting or run a session with the DPAs during the SEEDIG meeting, so that we can bring this conversation to the regional community. This would be another step in developing this idea and making sure that what we have mentioned here does not just stay at the level of ideas, but that we actually follow up on what we are discussing. I also think one thing that could be done is some kind of mapping of the regulatory bottlenecks in cross-border data sharing. This can show us what the remaining challenges are in terms of regulatory frameworks, infrastructure, and interoperability. From there, DPAs in different regions could take those findings and recommendations and ensure further alignment through bilateral and multilateral meetings.


Wairagala Wakabi: Thank you so much. Olga, we’ll go to Folake.


Folake Olagunju: Thank you very much. I’m going to piggyback on the words of the Honourable Minister from The Gambia. He has already spoken about harmonization and alignment. If that is taking place in The Gambia, by default, hopefully it will have moved to Senegal, and then to Sierra Leone, so that all three countries have done the harmonization the Honorable Minister was talking about. What I would like to see is the setup of a controlled test environment where we can get the public agencies of member states to trial an interoperable platform. If we’re able to do this for certain sectors, such as health, education, and identity systems, and it works, we will be able to take those lessons and scale up to the regional level. Thank you.


Wairagala Wakabi: Excellent. We’ll now… Okay. Good.


Tattugal Mambetalieva: First of all, I support all the proposals. Currently, at international platforms, we are advancing an initiative to create an intergovernmental agreement on data exchange among Central Asian countries, open for other countries to join. This is because data is the new oil, and issues of access are crucial, not only within a country but also at the regional level.


Wairagala Wakabi: Thank you so much. And finally, Meri?


Meri Sheroyan: For Armenia, what I can say is that the country draws on international experience in many initiatives, for instance in interoperability, using X-Road, the Estonian model. And I think many practical exercises should be done. It could be piloting small-scale data-sharing initiatives to understand whether cross-border public service delivery works or not, in different areas, starting with consular, migration, or environmental ones. So this would lead to understanding…


Wairagala Wakabi: Thank you so much, panelists, for those great ideas on joint initiatives and on what is relevant to work on in the future. I would now like to invite any comments or questions. If you have a question or comment, there is a mic over there; please go ahead. We have one or two people ready to ask questions; any others, please line up. Please mention your name and where you come from, and if you want a particular panelist to answer, you may direct the question to them.


Audience: Thank you, all very excellent panelists. All the points you raised are critical to data governance and cross-border governance. There is a new protocol and framework called Solid, for social linked data, that can help address the issues related to cross-border governance. Since all the panelists are from emerging countries, those countries also need their languages to be supported by large language models, so that the languages and the cultures built on them can be preserved. My question is about language data sets: how do you see language data sets being owned by your own country while also being shared across borders, with the Solid protocol and LingoAI? LingoAI is working on a whole solution that can address the issues you raised. The protocol was invented by the founding father of the World Wide Web, Sir Tim Berners-Lee, who has joined the IGF three times. So I would like to know about your countries’ deployment or awareness of this new protocol for the next-generation web. It was invented to take care of data control, data ownership, data sovereignty, and cross-border issues. I’m not sure whether your nation, country, or region has adopted, or is aware of, the Solid protocol. Thank you.


Wairagala Wakabi: Thank you so much. Anybody is welcome to respond to the question. Beyond awareness of the protocol itself, you might also speak to what kinds of initiatives are underway to promote data ownership and encourage cross-border data flows. Who is willing to give us a comment? Yes, Commissioner.


Milan Marinovic: As far as I know, Serbia has not adopted that protocol yet. But having heard how good the protocol is for data protection, I’m sure Serbia will adopt it soon.


Wairagala Wakabi: Thank you. Excellent. Other responses?


Meri Sheroyan: Maybe I can add something. I think that Armenia, like any other country, localizes sensitive data such as biometric information or health records. In Armenia’s case, I know the government is working on distinguishing between sensitive and less sensitive data. And I think having internationally recognized protocols or standards could also help cross-border data sharing. But countries that are in the process of implementing and adopting data governance frameworks first need to distinguish between sensitive and less sensitive data, and then move forward on adopting international standards. I am hopeful that countries like Armenia that are landlocked or emerging will step forward to this initiative, to make cross-border public service delivery possible within and beyond the country.


Audience: I am the coordinator and co-founder of the Singapore Internet Governance Forum, SGIGF. SGIGF would like to work with every country and its representatives to help promote the Solid protocol and LingoAI, to help protect the data and culture of all emerging countries. Okay, thank you.


Wairagala Wakabi: Thank you so much. Useful contextual information. We know where you’re coming from. And I think many of us will be willing to reach out to you. The minister has a response to that as well.


Dr. Ismaila Ceesay: I think the issue with language is a bit complex, because Africa has over 2,000 languages. Some countries have 56 languages; some have 200. So for us, just like Serbia, we haven’t really considered this yet. As a small country of 2.5 million people, we have almost 11 to 12 different languages, which are totally different from one another. So how we harmonize this across 2,000 languages is difficult. Because of the colonial history, we have French-speaking Africa, Spanish-speaking Africa, Portuguese-speaking Africa, English-speaking Africa. Perhaps this is something we can consider: using those languages, but not our indigenous languages.


Audience: Yes. LingoAI is actually designed for indigenous languages. As AI becomes popular and becomes a commodity, almost everyone in emerging countries is using it, and nearly all use English to prompt generative AI and get results. Gradually, indigenous languages will be forgotten, and especially the cultures built on those languages. If larger companies want to support indigenous languages, they will collect the data in a centralized way, so the data will be owned by the centralized company; after fine-tuning the large language model, the data will continue to flow to the larger companies. So the data will leave your countries, and your people and your country will not own it. This is called digital colonization. The new Solid protocol and LingoAI help counter this kind of digital colonization.


Wairagala Wakabi: Thank you so much for that clarification. Okay, thank you very much. Data colonization and data sovereignty are key issues in this conversation where many of us come from, so it’s good to know there is something addressing that. We will reach out to you, but we do have another comment. Thank you, sir.


Audience: Hi, good afternoon. I think my question might be a little premature looking at the landscape in our country, but I will go ahead and ask anyway. Where there is no IGF, no local IOS, where do you suggest this conversation starts in terms of thinking about regulations and guidance and protocol for cross-border data protection? Should it start with the regulator for the sector? Should it be emanating from civil society? Suggestions, I’m open to hear. Some quick guidelines in the two seconds we probably have. Thank you.


Wairagala Wakabi: Would you mind telling us where you’re from?


Audience: The Bahamas.


Wairagala Wakabi: Lucky you. But we have IGF and you don’t, so, you know. All right, we’ll begin with Olga. She has a response.


Olga Kyryliuk: I think it’s not really a problem that you don’t yet have a dedicated space, because the dialogue can be created simply from the desire to have the conversation. Very often you can have a much more open and trusted dialogue once you talk to stakeholders who actually hold decision-making and policy-making power. Even those with decision-making power might not have full awareness, or might not have full capacity to execute and enforce, and sometimes just some small support and a push from outside can be the beginning of a good, positive change inside the country. So I would say: if you want something specifically from the DPA, go to the DPA. If you want something from someone else, go to them. Start, perhaps, with bilateral one-to-one meetings, and once they feel more comfortable talking to other stakeholders, you can extend the dialogue.


Wairagala Wakabi: Thank you very much. Other panelists? Yes, please.


Milan Marinovic: Only a few words. It must be multilateral, not bilateral. So, when I said multilateral, it means data protection authorities, stakeholders, executive bodies, all, and civil society. Thank you very much.


Wairagala Wakabi: We do have another question. Please go ahead.


Audience: Thank you. Good afternoon. I’m Joseph. I’m here for the Wikimedia Foundation. I was very interested in the Serbian Commissioner’s comment about privacy as a human right, which, of course, we completely agree that it is. But, of course, there are many other human rights, the right to freedom of information and expression. And I’d like to ask the entire panel very broadly how, through this process of harmonizing regional data protection laws and implementing such new laws, how we can ensure that all human rights are respected throughout this process and that the right to privacy does not come at the expense of any other potential right.


Wairagala Wakabi: Thanks for that question. What we are going to do is couple it with another related question, namely: in many countries there is a diversity of legal systems, and institutional maturity differs. How can we move toward mutual recognition of data protection frameworks without undermining national data sovereignty? I would like you to reflect on that for one minute, even as you answer the question from the participant from Wikimedia. We have one and a half minutes; tie in your last word as well, please. This time, let’s start from my left and move on.


Meri Sheroyan: Okay, maybe I can start. I think there is a blurred line between protecting data and protecting freedom of expression and information, and sometimes governments need to deal with that: not restricting the freedom of information while also considering how to protect people’s rights on the Internet. In recent years the Internet has given us a broad mass of information, which can lead to fake news and disinformation, and for governments it is important to draw this line and protect rights without violating the freedom of information. Concerning the second question, I think there should be formal cooperation channels between data protection agencies in different countries, so that they can set clear protocols for audits, incident responses, enforcement coordination, and so on. My perspective is that these formal cooperation channels could preserve national digital sovereignty while implementing data protection frameworks.


Wairagala Wakabi: Thanks, Meri. Tatu, the same question for you.


Tattugal Mambetalieva: To continue our previous discussion: synchronization and harmonization of approaches between countries are crucial. We need to create an environment of trust and organize transparent data exchange, making it clear who is using the data and for what purpose, I think.


Wairagala Wakabi: Thank you so much. We’ll go to Olga and then the Commissioner.


Olga Kyryliuk: As a lawyer, I don’t see the mutual recognition of data protection frameworks as a threat to national sovereignty; it is rather an issue of legal interoperability. We often don’t need to create identical laws. What we really need is trustworthy equivalence and cross-border trust, so that whenever data is shared there are safeguards in place, and responsibility attaches to any breach or mishandling of data. I would also say it’s important to ensure transparent oversight and independent enforcement whenever it comes to handling personal data. Once that is in place, it is just a matter of dialogue and trust across borders and between nation-states.


Wairagala Wakabi: Thanks, Olga.


Milan Marinovic: When we speak about sovereignty and data protection, it is possible, because any law based, like the law in Serbia, on the GDPR and the Police Directive has exceptions to its principles. So if national security is in question, there are exceptions to the ordinary data protection regime. We have two models of data protection oversight: the ordinary one, and special bodies for situations like organized crime, national security, and so on. And regarding the question from Wikimedia, I must say something. There are states in Europe and in the world which have a two-in-one system: one body protecting two human rights, personal data protection and free access to information of public importance. That is the situation in Serbia, and I think it is a good situation, because in any particular case you can weigh which is stronger: personal data protection or the public’s right to know.


Wairagala Wakabi: Thank you, Commissioner. We’ll go to Folake for one minute and then we’ll end with the Minister.


Folake Olagunju: Thank you very much. Obviously, we all agree that building trust is required around data. For me, I know sovereignty matters.


Wairagala Wakabi: Thank you so much. And we’ll end with the Minister.


Dr. Ismaila Ceesay: Yes, I think we were able to solve that problem: we currently have the Access to Information Commission, which has been operationalized. And once we pass the data protection law by the end of this year, we are going to merge these two commissions, so they can fulfill that role of balancing each other, as the commissioner from Serbia has said. We will then have one commission responsible for access to information, but also for oversight of data protection. And my final words: three words summarize what I’ve been saying, and that is harmonization, harmonization, harmonization. We need to harmonize legal and regulatory frameworks and adopt a legally binding AU-wide data governance charter, aligned with the Malabo Convention, but also with GDPR principles and the Global Digital Compact. And finally, we need to create uniform standards for consent, privacy, cross-border flows and AI ethics. Thank you.


Wairagala Wakabi: Thank you, Dr. Ceesay. Thank you, Commissioner Marinović. Thank you, Meri, Folake, Olga and Tato. Ladies and gentlemen, please join me.


D

Dr. Ismaila Ceesay

Speech speed

135 words per minute

Speech length

1602 words

Speech time

709 seconds

The Gambia prioritizes institutional capacity building, legal reforms, and whole-of-government approach with data protection legislation currently in parliament

Explanation

The Gambia is developing a comprehensive national digital governance framework with support from UNDESA, focusing on building institutional capacity and implementing legal reforms. The country has formulated the Data Protection and Privacy Bill 2023 which is currently before the National Assembly and provides a robust legal framework covering various aspects of data protection.


Evidence

Data Protection and Privacy Bill 2023 currently in parliament, National Data Protection and Privacy Policy of 2019, National Strategy for the Development of Statistics with 2025 Statistics Act revision, national data policy supported by GIZ and UNDESA


Major discussion point

National Data Governance Frameworks and Strategies


Topics

Legal and regulatory | Development


Agreed with

– Tattugal Mambetalieva
– Meri Sheroyan

Agreed on

Capacity building and institutional development are critical priorities


The Gambia faces challenges including capacity gaps, fragmentation, and digital divide inequities across rural populations

Explanation

Despite progress in data governance, The Gambia encounters persistent challenges in implementing effective frameworks. Many government ministries and agencies lack technical capabilities, the national data ecosystem remains siloed with inconsistent standards, and there are significant inequities in digital access particularly affecting rural and underserved populations.


Evidence

Many ministries, departments, and agencies lack technical and analytical capabilities; national data ecosystem remains siloed with inconsistent standards; inequities in digital access and literacy particularly across rural and underserved populations


Major discussion point

National Data Governance Frameworks and Strategies


Topics

Development | Legal and regulatory


The Gambia aligns with African Union data policy framework and ECOWAS Supplementary Act on Personal Data Protection

Explanation

The Gambia’s data governance reforms are closely aligned with continental and regional frameworks to ensure harmonization and facilitate cross-border cooperation. The country is actively engaged in ECOWAS initiatives and follows African Union guidelines while also considering alignment with EU standards for broader international cooperation.


Evidence

African Union’s data policy framework emphasizing data sovereignty and cross-border data flows, ECOWAS Supplementary Act on Personal Data Protection expected to be endorsed by heads of state, alignment with EU data policy framework


Major discussion point

Regional Harmonization and Cross-Border Data Flows


Topics

Legal and regulatory | Development


Agreed with

– Folake Olagunju
– Olga Kyryliuk

Agreed on

Need for harmonization of data governance frameworks across regions


Ministry of Information plays critical role in fostering public trust through digital literacy, transparency, and stakeholder dialogue

Explanation

While the Ministry of Digital Economy handles technical aspects, the Ministry of Information focuses on building public trust and civic engagement in data governance. This includes sensitizing citizens about their data rights, promoting transparency through proactive disclosure of government data, and facilitating dialogue between government, civil society, and the public.


Evidence

Public awareness and digital literacy activities, transparency and access to information as key pillar of 2025-2029 strategic plan, media engagement and narrative framing, stakeholder dialogue and inclusion


Major discussion point

Human Rights and Digital Sovereignty


Topics

Human rights | Development | Sociocultural


Need to establish continental data governance framework and increase African representation in global standard setting bodies

Explanation

As a practical step for strengthening international cooperation, there should be efforts to finalize and promote adoption of continental data policy frameworks across all African member states. Additionally, increasing African representation in global bodies like ISO, IEEE, and UN organizations will ensure Africa’s interests are reflected in global data standards.


Evidence

EU data policy framework, Malabo Convention, GDPR-style protections, ISO, IEEE, UN bodies like ITU


Major discussion point

International Cooperation and Standard Setting


Topics

Legal and regulatory | Development


T

Tattugal Mambetalieva

Speech speed

80 words per minute

Speech length

272 words

Speech time

201 seconds

Kyrgyzstan adopted a digital code creating favorable environment for digital services using integration gateway for secure data exchange between state bodies and business

Explanation

Kyrgyzstan has implemented an innovative approach through its digital code that establishes standards for data handling with focus on legality, minimization, accuracy and integrity. The country uses an integration gateway system that enables secure and transparent data exchange between government bodies and businesses, setting a regional standard.


Evidence

Digital code focusing on legality, minimization of data collection, accuracy and integrity; integration gateway for secure and transparent data exchange between state bodies and business


Major discussion point

National Data Governance Frameworks and Strategies


Topics

Legal and regulatory | Economic


Kyrgyzstan avoids centralization and localization of data unlike neighboring countries, reducing risks to data protection

Explanation

Unlike Kazakhstan and Uzbekistan which use data centralization and localization approaches, Kyrgyzstan has chosen a different path that avoids these practices. This approach reduces risks for data protection and creates less additional burden for businesses, though challenges for civil society regarding data protection and ethical use still remain.


Evidence

Differs from neighboring countries like Kazakhstan and Uzbekistan where data centralization and localization is used; centralization has risks for data protection and localization creates additional burden to business


Major discussion point

Balancing Data Protection with Digital Innovation


Topics

Legal and regulatory | Human rights


Disagreed with

Disagreed on

Data localization and centralization approaches


Civil society must monitor data exchange arrangements to ensure transparency, accountability and inclusivity

Explanation

Given that Central Asian countries are economically interdependent and require data exchange for interaction, civil society has a crucial role in oversight. They must primarily monitor cross-border data exchange arrangements to ensure countries guarantee proper safeguards and maintain democratic principles in data governance.


Evidence

Central Asia countries are economically interdependent, making data exchange crucial for interaction; cross-border data exchange raises concerns about ensuring adequate data security


Major discussion point

Multi-Stakeholder Engagement and Civil Society Role


Topics

Human rights | Legal and regulatory


Agreed with

– Dr. Ismaila Ceesay
– Meri Sheroyan

Agreed on

Capacity building and institutional development are critical priorities


Central Asia countries need intergovernmental agreement on data exchange due to economic interdependence

Explanation

There is an initiative being advanced at international platforms to create an intergovernmental agreement on data exchange among Central Asian countries, open for other countries to join. This is driven by the recognition that data is as valuable as oil and that access issues are crucial not only within countries but also at the regional level.


Evidence

Data is the new oil, and issues of access are crucial, not only within a country, but also at the regional level


Major discussion point

Regional Harmonization and Cross-Border Data Flows


Topics

Legal and regulatory | Economic


F

Folake Olagunju

Speech speed

169 words per minute

Speech length

1291 words

Speech time

456 seconds

ECOWAS revised the Supplementary Act on Data Protection to support cross-border data flow through harmonization rather than homogenization

Explanation

ECOWAS has revised its Supplementary Act on Data Protection with extensive stakeholder consultation across West Africa to support cross-border data flows. The approach focuses on harmonization at the regional level while avoiding homogenization, allowing for tailored solutions that respect the different nuances of each member country while maintaining regional coherence.


Evidence

Studies with different stakeholder groups across West Africa including civil society and private sector; whole of society approach rather than just whole of government; harmonisation at regional level but not homogenisation


Major discussion point

Regional Harmonization and Cross-Border Data Flows


Topics

Legal and regulatory | Development


Agreed with

– Dr. Ismaila Ceesay
– Olga Kyryliuk

Agreed on

Need for harmonization of data governance frameworks across regions


ECOWAS prioritizes inclusive engagement ensuring all member states participate from beginning to end with whole-of-society approach

Explanation

ECOWAS emphasizes evidence-based policy making through inclusive engagement that involves all member states throughout the entire process. Rather than just a whole-of-government approach, they adopt a whole-of-society perspective that includes civil society, private sector, academia, governments, and citizens, recognizing that data governance affects everyone.


Evidence

Member States are right with us from the very beginning all the way to the end; studies with different stakeholder groups across West Africa including civil society and private sector; whole of society approach because data involves every single person


Major discussion point

Multi-Stakeholder Engagement and Civil Society Role


Topics

Development | Human rights


Agreed with

– Meri Sheroyan
– Milan Marinovic

Agreed on

Multi-stakeholder engagement is crucial for effective data governance


Disagreed with

– Dr. Ismaila Ceesay

Disagreed on

Scope of stakeholder engagement approach


Countries must distinguish between sensitive and less sensitive data categories to facilitate responsible data sharing

Explanation

ECOWAS is working on defining sensitive and non-sensitive data categories for member countries to address reluctance in data sharing. When organizations are asked to share data, they are often hesitant because they don’t know which data needs to be sovereign and which can be shared, so clearer categorization will help facilitate responsible data sharing.


Evidence

When you ask someone to share data, they’re a bit reluctant because they don’t know which data needs to be sovereign and which data can be shared


Major discussion point

Balancing Data Protection with Digital Innovation


Topics

Legal and regulatory | Human rights


Controlled test environments for member states to trial interoperable platforms in sectors like health and education

Explanation

As a practical step for the next 12 months, ECOWAS proposes setting up controlled test environments where member states’ public agencies can trial interoperable platforms. If successful trials in sectors such as health, education, and identity systems work, the lessons learned can be scaled up to regional implementation.


Evidence

Trial an interoperable platform for certain sectors, such as health, education, identity systems; if it works, take those lessons and scale up to the regional level


Major discussion point

International Cooperation and Standard Setting


Topics

Infrastructure | Development


O

Olga Kyryliuk

Speech speed

137 words per minute

Speech length

1337 words

Speech time

585 seconds

Southeastern Europe faces regulatory divide between EU member states operating under GDPR and non-EU countries still seeking compliance

Explanation

The Southeastern European region is characterized by a regulatory divide where some countries like Croatia operate under EU frameworks such as GDPR, while others like North Macedonia are still working toward full institutional and legal compliance. This creates challenges for cross-border trust and data sharing, as non-EU countries are often still considered third countries despite having laws that closely mirror EU standards.


Evidence

Countries operating under EU regulatory framework such as GDPR (Croatia) vs countries still in process of securing full compliance (North Macedonia); non-EU countries considered as third countries in terms of data protection guarantees


Major discussion point

Regional Harmonization and Cross-Border Data Flows


Topics

Legal and regulatory | Human rights


IGFs like CDIG contribute by identifying shared priorities and facilitating dialogue between stakeholders across regions

Explanation

Internet Governance Forums, particularly CDIG (Central and Eastern European Dialogue on Internet Governance), play a crucial role in harmonizing data governance frameworks by connecting stakeholders from across the region and facilitating dialogue. While IGFs cannot create laws, they create opportunities for better cooperation and help improve trust between counterparts from neighboring countries.


Evidence

Connecting in-country stakeholders from across the region and bringing them to the same room; help improve trust between counterparts from neighboring countries; create opportunity where better cooperation can be shaped


Major discussion point

Multi-Stakeholder Engagement and Civil Society Role


Topics

Legal and regulatory | Development


Mutual recognition of data protection frameworks is about legal interoperability rather than threat to national sovereignty

Explanation

The mutual recognition of data protection frameworks should be viewed as a matter of legal interoperability rather than a threat to national sovereignty. What is needed is trustworthy equivalence and cross-border trust with safeguards and responsibility for data breaches, along with transparent oversight and independent enforcement for personal data handling.


Evidence

Don’t need to create identical laws but need to create trustworthy equivalence and cross-border trust; transparent oversight and independent enforcement whenever it comes to handling personal data


Major discussion point

Human Rights and Digital Sovereignty


Topics

Legal and regulatory | Human rights


Agreed with

– Dr. Ismaila Ceesay
– Folake Olagunju

Agreed on

Need for harmonization of data governance frameworks across regions


M

Meri Sheroyan

Speech speed

120 words per minute

Speech length

828 words

Speech time

410 seconds

Armenia is building legal and technical frameworks for digital transformation including e-governance platforms and data governance projects

Explanation

Armenia has made notable progress in digital transformation by launching e-governance platforms, digitizing public services, and initiating important data governance projects. The country is currently working on building both legal and technical frameworks that define how public information is accessed, set standards for data collection and processing, and regulate database use and management.


Evidence

Launching e-governance platforms, digitizing public services, initiating important data governance projects; frameworks aim to define how public information is accessed, set standards for data collection and processing, regulate use and management of databases


Major discussion point

National Data Governance Frameworks and Strategies


Topics

Legal and regulatory | Development


Agreed with

– Milan Marinovic
– Audience

Agreed on

Balancing data protection with innovation and other rights is a fundamental challenge


Civic tech voices are essential partners building trust in public institutions and serving as bridge between citizens and government

Explanation

Civic tech organizations such as non-profits, watchdog groups, data advocates, and digital rights defenders play a crucial role in Armenia’s digital transformation by serving as bridges between citizens and public institutions. Their involvement goes beyond monitoring to include flagging ethical concerns, identifying data misuse, addressing access barriers, and educating citizens about data use and digital systems.


Evidence

Non-profits, watchdog groups, data advocates, digital right defenders; involvement includes monitoring, flagging ethical concerns, identifying data misuse, addressing barriers of access; outreach projects and education for citizens


Major discussion point

Multi-Stakeholder Engagement and Civil Society Role


Topics

Human rights | Development | Sociocultural


Agreed with

– Dr. Ismaila Ceesay
– Tattugal Mambetalieva

Agreed on

Capacity building and institutional development are critical priorities


Piloting small-scale data-sharing initiatives for cross-border public service delivery in consular, migration, or environmental areas

Explanation

Armenia incorporates international experience in initiatives like interoperability, using models such as Estonia’s X-Road system. As a practical step forward, the country should pilot small-scale data-sharing initiatives to test whether cross-border public service delivery works effectively in areas such as consular services, migration, or environmental management.


Evidence

Using X-Road, the Estonian interoperability model; piloting small-scale data-sharing initiatives in consular, migration or environmental areas


Major discussion point

International Cooperation and Standard Setting


Topics

Infrastructure | Legal and regulatory


M

Milan Marinovic

Speech speed

111 words per minute

Speech length

1037 words

Speech time

556 seconds

Parallel balanced development of digitalization and personal data protection systems is essential, as they cannot exist without each other

Explanation

The accelerated development of digitalization in all areas of life must be accompanied by the development of personal data protection systems. Just as natural opposites like day and night or summer and winter cannot exist without each other, the processing of personal data cannot exist without its protection, creating a strong interdependent link between processing and protection.


Evidence

Just as a day cannot exist without night, summer without winter, so the processing of personal data cannot exist without its protection; digitalization and AI feed and depend on data


Major discussion point

Balancing Data Protection with Digital Innovation


Topics

Human rights | Legal and regulatory


Agreed with

– Meri Sheroyan
– Audience

Agreed on

Balancing data protection with innovation and other rights is a fundamental challenge


Protection of personal data is one of the most threatened fundamental human rights in the era of rapid technological development and AI

Explanation

In today’s era of rapid development of modern technologies, widespread digitalization, and enormous use of artificial intelligence, the protection of personal data and the right to privacy in general have become among the most threatened fundamental human rights. This makes it extremely difficult but not impossible to find the appropriate balance between digital systems and data protection.


Evidence

Era of rapid development of modern technologies, widespread digitalization and enormous use of artificial intelligence; extremely difficult to find the appropriate balance between digital systems and the protection of personal data


Major discussion point

Balancing Data Protection with Digital Innovation


Topics

Human rights | Legal and regulatory


Proposed E-association of DPAs worldwide to enable exchange of practices and mutual legal assistance in simple online format

Explanation

The proposal is to form a global association of Data Protection Authorities (DPAs) in an online format that would allow all regulators, regardless of their status or country, to exchange practices in personal data protection, share experiences, provide mutual legal assistance, and solve common problems efficiently on bilateral and multilateral levels.


Evidence

All regulators regardless of status have opportunity to exchange practices, provide mutual legal assistance and solve common problems in simple, easy and efficient way; plan to send email to all DPAs worldwide to explain the idea


Major discussion point

International Cooperation and Standard Setting


Topics

Legal and regulatory | Human rights


Agreed with

– Folake Olagunju
– Meri Sheroyan

Agreed on

Multi-stakeholder engagement is crucial for effective data governance


Two-in-one system protecting both personal data and free access to public information allows balancing competing rights

Explanation

Some states in Europe and worldwide have a two-in-one system where a single body protects two different human rights: personal data protection and free access to information of public importance. This system, as implemented in Serbia, allows for weighing in any particular case which right is stronger – personal data protection or the public’s right to know.


Evidence

Serbia has a two-in-one system with a single body protecting both personal data protection and free access to information of public importance; can weigh in any particular case which is stronger, personal data protection or the public’s right to know


Major discussion point

Human Rights and Digital Sovereignty


Topics

Human rights | Legal and regulatory


W

Wairagala Wakabi

Speech speed

121 words per minute

Speech length

1998 words

Speech time

984 seconds

Data governance is crucial for building digital cooperation and requires inter-regional dialogue among policymakers and civil society

Explanation

The session aims to contribute to inter-regional dialogue among policymakers and civil society leaders from West Africa, Eastern Partnership, and Western Balkans to leverage common knowledge on data governance. This approach recognizes that effective data governance requires collaboration across regions and stakeholder groups.


Evidence

Session bringing together speakers from various regions to discuss data governance in line with IGF sub-theme of building digital cooperation


Major discussion point

International Cooperation and Standard Setting


Topics

Development | Legal and regulatory


Domestic and cross-border data governance are both essential for responsible, future-ready and rights-based global frameworks

Explanation

Effective governance of data both domestically and across borders is crucial for accelerating responsible, future-ready and rights-based data governance globally. This requires exploring common challenges and valuable experiences from different regional contexts.


Evidence

Need to explore common challenges and valuable experiences from different regions to accelerate responsible, future-ready and rights-based data governance globally


Major discussion point

Regional Harmonization and Cross-Border Data Flows


Topics

Human rights | Legal and regulatory


A

Audience

Speech speed

116 words per minute

Speech length

667 words

Speech time

343 seconds

SOLID protocol and LingoAI can address cross-border data governance issues while preserving indigenous languages and preventing digital colonization

Explanation

The SOLID protocol, invented by Tim Berners-Lee, is designed to address data control, ownership, sovereignty and cross-border issues. LingoAI specifically supports indigenous languages to prevent digital colonization where larger companies collect language data centrally, causing countries to lose ownership of their cultural and linguistic data.


Evidence

SOLID protocol invented by Tim Berners-Lee, founding father of the World Wide Web; LingoAI designed for indigenous languages to prevent digital colonization, where language data flows out of countries to larger companies


Major discussion point

Human Rights and Digital Sovereignty


Topics

Human rights | Sociocultural | Legal and regulatory


Multi-stakeholder dialogue should start with direct engagement of decision-makers even without formal IGF structures

Explanation

In countries without established IGF or local governance structures, conversations about data protection regulations should begin with direct bilateral engagement with stakeholders who have decision-making power. The dialogue can start from the desire to have conversations and gradually expand to include more stakeholders once trust is built.


Evidence

Question from The Bahamas about where to start conversations in absence of IGF or local governance structures


Major discussion point

Multi-Stakeholder Engagement and Civil Society Role


Topics

Development | Legal and regulatory


Human rights must be balanced in data protection implementation to ensure privacy doesn’t come at expense of freedom of information and expression

Explanation

While privacy is a fundamental human right, the implementation of data protection laws and harmonization of regional frameworks must ensure that all human rights are respected. The right to privacy should not come at the expense of other rights such as freedom of information and expression.


Evidence

Question from Wikimedia Foundation about ensuring all human rights are respected throughout harmonization process


Major discussion point

Human Rights and Digital Sovereignty


Topics

Human rights | Legal and regulatory


Agreed with

– Milan Marinovic
– Meri Sheroyan

Agreed on

Balancing data protection with innovation and other rights is a fundamental challenge


Agreements

Agreement points

Need for harmonization of data governance frameworks across regions

Speakers

– Dr. Ismaila Ceesay
– Folake Olagunju
– Olga Kyryliuk

Arguments

The Gambia aligns with African Union data policy framework and ECOWAS Supplementary Act on Personal Data Protection


ECOWAS revised the Supplementary Act on Data Protection to support cross-border data flow through harmonization rather than homogenization


Mutual recognition of data protection frameworks is about legal interoperability rather than threat to national sovereignty


Summary

All speakers agree that regional harmonization of data governance frameworks is essential, but emphasize that harmonization should not mean homogenization – allowing for local adaptations while maintaining interoperability


Topics

Legal and regulatory | Development


Multi-stakeholder engagement is crucial for effective data governance

Speakers

– Folake Olagunju
– Meri Sheroyan
– Milan Marinovic

Arguments

ECOWAS prioritizes inclusive engagement ensuring all member states participate from beginning to end with whole-of-society approach


Civic tech voices are essential partners building trust in public institutions and serving as bridge between citizens and government


Proposed E-association of DPAs worldwide to enable exchange of practices and mutual legal assistance in simple online format


Summary

Speakers consistently emphasize that effective data governance requires involvement of all stakeholders including government, civil society, private sector, academia, and citizens rather than top-down approaches


Topics

Development | Human rights | Legal and regulatory


Balancing data protection with innovation and other rights is a fundamental challenge

Speakers

– Milan Marinovic
– Meri Sheroyan
– Audience

Arguments

Parallel balanced development of digitalization and personal data protection systems is essential, as they cannot exist without each other


Armenia is building legal and technical frameworks for digital transformation including e-governance platforms and data governance projects


Human rights must be balanced in data protection implementation to ensure privacy doesn’t come at expense of freedom of information and expression


Summary

There is consensus that data protection cannot be implemented in isolation but must be balanced with digital innovation, economic development, and other fundamental rights like freedom of expression


Topics

Human rights | Legal and regulatory


Capacity building and institutional development are critical priorities

Speakers

– Dr. Ismaila Ceesay
– Tattugal Mambetalieva
– Meri Sheroyan

Arguments

The Gambia prioritizes institutional capacity building, legal reforms, and whole-of-government approach with data protection legislation currently in parliament


Civil society must monitor data exchange arrangements to ensure transparency, accountability and inclusivity


Civic tech voices are essential partners building trust in public institutions and serving as bridge between citizens and government


Summary

All speakers recognize that effective data governance requires significant investment in building institutional capacity, technical capabilities, and oversight mechanisms


Topics

Development | Legal and regulatory


Similar viewpoints

Both speakers advocate for institutional approaches that balance data protection with transparency and access to information, with dedicated bodies handling both responsibilities

Speakers

– Dr. Ismaila Ceesay
– Milan Marinovic

Arguments

Ministry of Information plays critical role in fostering public trust through digital literacy, transparency, and stakeholder dialogue


Two-in-one system protecting both personal data and free access to public information allows balancing competing rights


Topics

Human rights | Legal and regulatory


Both emphasize the need for practical, incremental approaches to data sharing that start with clear categorization and small-scale pilots before scaling up

Speakers

– Folake Olagunju
– Meri Sheroyan

Arguments

Countries must distinguish between sensitive and less sensitive data categories to facilitate responsible data sharing


Piloting small-scale data-sharing initiatives for cross-border public service delivery in consular, migration, or environmental areas


Topics

Legal and regulatory | Infrastructure


Both speakers highlight how their regions face challenges from regulatory fragmentation and different approaches to data governance among neighboring countries

Speakers

– Tattugal Mambetalieva
– Olga Kyryliuk

Arguments

Kyrgyzstan avoids centralization and localization of data unlike neighboring countries, reducing risks to data protection


Southeastern Europe faces regulatory divide between EU member states operating under GDPR and non-EU countries still seeking compliance


Topics

Legal and regulatory | Human rights


Unexpected consensus

Digital colonization and indigenous language preservation

Speakers

– Audience
– Dr. Ismaila Ceesay

Arguments

SOLID protocol and LingoAI can address cross-border data governance issues while preserving indigenous languages and preventing digital colonization


Need to establish continental data governance framework and increase African representation in global standard setting bodies


Explanation

There was unexpected alignment between the audience member’s technical solution (SOLID protocol) and the Minister’s call for African representation in global standards, both addressing concerns about digital sovereignty and preventing external control over local data and cultural assets


Topics

Human rights | Sociocultural | Legal and regulatory


Practical implementation through pilot projects and controlled environments

Speakers

– Folake Olagunju
– Meri Sheroyan
– Olga Kyryliuk

Arguments

Controlled test environments for member states to trial interoperable platforms in sectors like health and education


Piloting small-scale data-sharing initiatives for cross-border public service delivery in consular, migration, or environmental areas


IGFs like CDIG contribute by identifying shared priorities and facilitating dialogue between stakeholders across regions


Explanation

Unexpectedly, speakers from different regions converged on the same practical approach of starting with small-scale pilots and controlled environments rather than attempting large-scale implementations immediately


Topics

Infrastructure | Development | Legal and regulatory


Overall assessment

Summary

The discussion revealed strong consensus on fundamental principles of data governance including the need for harmonization (not homogenization), multi-stakeholder engagement, capacity building, and balancing protection with innovation. Speakers consistently emphasized practical, incremental approaches over ambitious large-scale implementations.


Consensus level

High level of consensus on principles and approaches, with speakers from different regions facing similar challenges and converging on similar solutions. This suggests that despite different regulatory environments, there are universal principles and practical approaches that can guide effective data governance across regions. The consensus provides a strong foundation for inter-regional cooperation and knowledge sharing.


Differences

Different viewpoints

Data localization and centralization approaches

Speakers

– Tattugal Mambetalieva

Arguments

Kyrgyzstan avoids centralization and localization of data unlike neighboring countries, reducing risks to data protection


Summary

Kyrgyzstan explicitly chose not to use data centralization and localization approaches, differing from neighboring countries like Kazakhstan and Uzbekistan. This represents a fundamental disagreement on data governance strategy within the Central Asian region.


Topics

Legal and regulatory | Human rights


Scope of stakeholder engagement approach

Speakers

– Dr. Ismaila Ceesay
– Folake Olagunju

Arguments

The Gambia is spearheading cross-sectoral coordination to ensure that data governance is embedded across ministries, departments, and agencies


ECOWAS prioritizes inclusive engagement ensuring all member states participate from beginning to end with whole-of-society approach


Summary

While The Gambia focuses on a ‘whole-of-government’ approach primarily targeting government institutions, ECOWAS advocates for a broader ‘whole-of-society’ approach that includes civil society, private sector, academia, and citizens from the beginning.


Topics

Development | Human rights


Unexpected differences

Language preservation in data governance

Speakers

– Dr. Ismaila Ceesay
– Audience

Arguments

The issue with language is a bit complex because Africa has over 2,000 languages. Some countries have 56 languages. Some have 200 languages. So for us, just like Serbia, we haven’t really considered this yet


SOLID protocol and LingoAI can address cross-border data governance issues while preserving indigenous languages and preventing digital colonization


Explanation

An unexpected disagreement emerged around the feasibility and priority of preserving indigenous languages in data governance frameworks. While the audience member emphasized the importance of preventing digital colonization through language preservation, the Minister from The Gambia expressed skepticism about the practical implementation given Africa’s linguistic diversity.


Topics

Human rights | Sociocultural | Legal and regulatory


Overall assessment

Summary

The discussion revealed relatively low levels of fundamental disagreement among speakers, with most conflicts centered around implementation approaches rather than core principles. Main areas of disagreement included data localization strategies, stakeholder engagement scope, and practical approaches to cross-border cooperation mechanisms.


Disagreement level

Low to moderate disagreement level. The speakers generally agreed on fundamental principles of data governance, human rights protection, and the need for regional cooperation. Disagreements were primarily tactical rather than strategic, focusing on ‘how’ rather than ‘what’ or ‘why’. This suggests a mature policy environment where stakeholders share common goals but may have different preferred pathways to achieve them. The implications are positive for international cooperation, as the shared foundation provides a basis for compromise and collaborative solutions.


Partial agreements


Takeaways

Key takeaways

Harmonization of data governance frameworks across regions is critical, but should focus on harmonization rather than homogenization to respect local contexts and nuances


Parallel balanced development of digitalization and data protection systems is essential – they cannot exist without each other and must grow together


Multi-stakeholder engagement involving government, civil society, private sector, academia, and citizens is fundamental to successful data governance implementation


Cross-border data flows require building trust frameworks and legal interoperability rather than identical laws across jurisdictions


Regional organizations like ECOWAS, African Union, and regional IGFs play crucial roles in facilitating dialogue and coordination between member states


Capacity building, institutional strengthening, and bridging digital divides remain persistent challenges across all regions discussed


Data protection authorities need stronger international cooperation mechanisms to address cross-border data governance challenges effectively


Resolutions and action items

Commissioner Marinovic to send emails to all DPAs worldwide next week proposing creation of an E-association of DPAs for global cooperation


CDIG to host a side meeting or session with DPAs during their October meeting in Athens to advance inter-regional dialogue


ECOWAS to establish controlled test environments for member states to trial interoperable platforms in sectors like health, education, and identity systems


The Gambia to finalize data protection legislation by end of year and merge access to information commission with future data protection authority


Central Asia countries to develop intergovernmental agreement on data exchange with openness for other countries to join


Armenia to pilot small-scale cross-border data-sharing initiatives in consular, migration, or environmental areas


African countries to establish continental data governance framework and increase representation in global standard-setting bodies like ISO, IEEE, and ITU


Unresolved issues

How to effectively handle indigenous language preservation and data sovereignty concerns in the context of AI and large language models


Balancing privacy rights with freedom of information and expression rights in harmonized frameworks


Addressing the regulatory divide between EU member states and non-EU countries in Southeastern Europe for seamless data cooperation


Managing the complexity of over 2,000 languages across Africa in data governance frameworks


Establishing clear protocols for distinguishing between sensitive and non-sensitive data categories across different jurisdictions


Creating adequate safeguards against digital colonization while enabling beneficial cross-border data flows


Developing capacity and infrastructure in countries without existing IGFs or mature institutional frameworks


Suggested compromises

Two-in-one system combining data protection and access to information oversight in single authority to balance competing rights (as implemented in Serbia and planned for The Gambia)


Using colonial languages (English, French, Spanish, Portuguese) as interim solution for African language data governance while working toward indigenous language solutions


Creating trustworthy equivalence rather than identical laws for mutual recognition of data protection frameworks


Establishing formal cooperation channels between countries’ data protection agencies with clear protocols for audits and enforcement coordination


Starting with bilateral one-to-one meetings between stakeholders before expanding to multilateral dialogue in countries without established frameworks


Mapping regulatory bottlenecks in cross-border data sharing to identify specific areas for targeted bilateral and multilateral cooperation


Thought provoking comments

Protection of personal data, as well as the right to privacy in general, is one of the most threatened fundamental human rights in today’s era of rapid development of modern technologies… Just as a day cannot exist without night, summer without winter, so the processing of personal data cannot exist without its protection.

Speaker

Milan Marinovic (Commissioner, Serbia)


Reason

This philosophical framing elevated the discussion from technical compliance to fundamental human rights, using powerful metaphors to illustrate the inseparable relationship between data use and protection. It challenged the common view that privacy and innovation are in tension.


Impact

This comment shifted the entire tone of the discussion from technical implementation to rights-based approaches. It influenced subsequent speakers to frame their responses in terms of balancing rights rather than just regulatory compliance, and set up the foundation for later discussions about balancing privacy with other human rights like freedom of information.


My idea is to form an association of DPAs named E-association of DPAs from all over the world on a global level in an online format… I plan next week to send to all DPAs in the world email in which I will explain the idea of creating an association and ask them did they support this idea.

Speaker

Milan Marinovic (Commissioner, Serbia)


Reason

This was a concrete, actionable proposal that moved beyond theoretical discussion to practical implementation. It demonstrated how regional cooperation could scale to global cooperation and showed initiative in creating new institutional frameworks.


Impact

This proposal energized the discussion and influenced other panelists to think more concretely about actionable steps. It led to Olga offering to host a side meeting during CDIG, showing how one concrete proposal can catalyze additional collaborative initiatives.


It’s not just about a whole of government. I understand why The Gambia is doing a whole of government, but for us at the regional perspective, we’re looking at a whole of society because this is absolutely vital… It’s about harmonisation at the regional level, but not homogenisation.

Speaker

Folake Olagunju (ECOWAS)


Reason

This distinction between ‘whole of government’ and ‘whole of society’ approaches was intellectually significant, recognizing that data governance affects everyone, not just government entities. The harmonization vs. homogenization distinction was particularly nuanced, acknowledging the need for common standards while respecting local contexts.


Impact

This comment broadened the scope of the discussion to include all stakeholders and influenced how other speakers conceptualized inclusive governance. It also provided a framework for thinking about regional cooperation that respects sovereignty while enabling interoperability.


Kyrgyzstan doesn’t use centralization and localization of data. Centralization of data has risks for data protection and localization of data creates additional burden to business. This approach differs from many neighboring countries like Kazakhstan and Uzbekistan where data centralization and localization of data is used.

Speaker

Tattugal Mambetalieva (Kyrgyzstan)


Reason

This was a bold counter-narrative to the common assumption that data localization equals data sovereignty. It challenged conventional wisdom by arguing that decentralization might actually be better for both privacy and business, offering a different model from regional neighbors.


Impact

This comment introduced complexity to the discussion about data sovereignty approaches and showed that there isn’t one-size-fits-all solution. It prompted reflection on different models and their trade-offs, contributing to a more nuanced understanding of policy options.


So, the data will run out of your countries, and your people and the country don’t own the data. This is called digital colonization. So, the new protocol and the solid and the lingual AI is helping to anti, you know, this kind of a digital colonization.

Speaker

Audience member (Singapore IGF)


Reason

The introduction of ‘digital colonization’ as a concept was provocative and reframed data governance as an anti-colonial struggle. This connected historical power dynamics to contemporary digital issues, particularly relevant for the Global South participants.


Impact

This comment resonated strongly with the moderator and several panelists, as evidenced by the moderator’s response: ‘Data colonization and data sovereignty are key issues in our conversation from where many of us come from.’ It added a critical perspective that connected technical discussions to broader issues of global power and equity.


I, as someone who deals with the protection of personal data, feel like a cat at a dog’s exhibition.

Speaker

Milan Marinovic (Commissioner, Serbia)


Reason

This humorous but insightful metaphor captured the tension that privacy advocates often feel in technology-focused discussions. It acknowledged the challenge of being the ‘voice of caution’ in innovation-driven environments while doing so with self-awareness and humor.


Impact

This comment created a moment of levity that made the discussion more relatable and human. It also established Marinovic as someone who could balance serious concerns with approachable communication, which may have made his subsequent technical proposals more palatable to the audience.


Overall assessment

These key comments fundamentally shaped the discussion by elevating it from a technical policy exchange to a more philosophical and rights-based dialogue. Marinovic’s human rights framing and metaphors set a tone that influenced how other speakers approached the topic, while his concrete proposal for a global DPA association provided a practical anchor for the theoretical discussions. The ‘whole of society’ vs ‘whole of government’ distinction broadened the scope of consideration, and the digital colonization concept added critical depth about power dynamics. Together, these comments created a multi-layered conversation that balanced philosophical foundations, practical proposals, inclusive approaches, and critical perspectives on global digital governance. The discussion evolved from individual country reports to collaborative problem-solving, with participants building on each other’s insights to develop more nuanced and actionable approaches to cross-border data governance.


Follow-up questions

How can regions effectively balance harmonization with homogenization when developing cross-border data governance frameworks?

Speaker

Folake Olagunju


Explanation

This addresses the challenge of creating unified regional standards while respecting individual country nuances and sovereignty


What specific mechanisms can be established to create mutual recognition of data protection frameworks between countries with different legal systems and institutional maturity levels?

Speaker

Wairagala Wakabi


Explanation

This explores how countries can work together despite having different levels of development in their data protection systems


How can the proposed E-association of DPAs be structured and implemented to facilitate global cooperation among data protection authorities?

Speaker

Milan Marinovic


Explanation

This follows up on the Commissioner’s initiative to create a global online association of data protection authorities for knowledge sharing and cooperation


What are the practical steps for implementing controlled test environments for interoperable platforms across member states in different sectors?

Speaker

Folake Olagunju


Explanation

This addresses the need for pilot programs to test cross-border data sharing in sectors like health, education, and identity systems


How can countries with over 2,000 indigenous languages effectively implement language-preserving AI and data governance protocols?

Speaker

Dr. Ismaila Ceesay and Singapore IGF representative


Explanation

This explores the challenge of preserving linguistic diversity while implementing modern data governance frameworks


What is the level of awareness and potential for adoption of the SOLID protocol across different regions for addressing data sovereignty and digital colonization?

Speaker

Singapore IGF representative


Explanation

This investigates how emerging technologies can help countries maintain control over their data while enabling cross-border flows


How can countries without established IGFs or local internet governance structures initiate data governance conversations and which stakeholders should lead this process?

Speaker

Participant from The Bahamas


Explanation

This addresses the practical challenge of starting data governance initiatives in countries with limited existing infrastructure


How can data protection laws be designed to ensure all human rights are respected, particularly balancing privacy rights with freedom of information and expression?

Speaker

Joseph from Wikimedia Foundation


Explanation

This explores the complex challenge of protecting multiple human rights simultaneously without one undermining another


What specific criteria should be used to distinguish between sensitive and non-sensitive data categories at both national and regional levels?

Speaker

Folake Olagunju and Meri Sheroyan


Explanation

This addresses the need for clear categorization systems to facilitate appropriate data sharing while maintaining security


How can small-scale pilot initiatives for cross-border public service delivery be designed and implemented in areas like consular services, migration, and environmental cooperation?

Speaker

Meri Sheroyan


Explanation

This explores practical approaches to testing cross-border data sharing through specific use cases


Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

WS #376 Elevating Children's Voices in AI Design

WS #376 Elevating Children's Voices in AI Design

Session at a glance

Summary

This workshop, titled “Elevating Children’s Voices in AI Design,” brought together researchers, experts, and young people to discuss the impact of artificial intelligence on children and how to make AI development more child-centric. The session was sponsored by the Lego Group and included participants from the Family Online Safety Institute, the Alan Turing Institute, and the UN’s Center for AI and Robotics. The discussion began with powerful video messages from young people across the UK, who emphasized that AI should be viewed as a tool to aid rather than replace humans, while highlighting concerns about privacy, environmental impact, and the need for ethical development.


Stephen Balkam from the Family Online Safety Institute presented research showing that, unlike previous technology trends, teens now believe their parents know more about generative AI than they do. The research revealed that while parents use AI mainly for analytical tasks, teens focus on efficiency-boosting activities like proofreading and summarizing. Both groups expressed concerns about job loss and misinformation, though they remained optimistic about AI’s potential for learning and scientific progress. Maria Eira from UNICRI shared findings from a global survey indicating a lack of awareness among parents about how their children use AI for personal purposes, and noted that parents who regularly use AI themselves tend to view its impact on children more positively.


Dr. Mhairi Aitken from the Alan Turing Institute presented research funded by the Lego Group showing that about 22% of children aged 8-12 use generative AI, with significant disparities between private and state-funded schools. The research found that children with additional learning needs were more likely to use AI for communication, and that children showed strong preferences for traditional tactile art materials over AI-generated alternatives. Key concerns raised by children included bias and representation in AI outputs, environmental impacts, and exposure to inappropriate content. The discussion concluded that AI systems are not currently designed with children in mind, echoing patterns from previous technology waves, and emphasized the need for greater transparency, child-centered design principles, and critical AI literacy rather than just technical understanding.


Keypoints

## Major Discussion Points:


– **Children’s Current AI Usage and Readiness**: Research reveals that children aged 8-12 are already using generative AI (22% reported usage), but AI systems are not designed with children in mind. This creates a fundamental mismatch where children are adapting to adult-designed systems rather than having age-appropriate tools available to them.


– **Parental Awareness and Communication Gaps**: Studies show significant disconnects between parents and children regarding AI use. While parents are aware of academic uses, they often don’t know about more personal uses like AI companions. Parents who regularly use AI themselves tend to view its impact on children more positively, highlighting the importance of parental AI literacy.


– **Equity and Access Concerns**: Research identified stark differences in AI access and education between private and state-funded schools, with children in private schools having significantly more exposure to and understanding of generative AI. This points to growing digital divides that could exacerbate existing educational inequalities.


– **Children’s Rights and Ethical Considerations**: Young people expressed sophisticated concerns about AI bias, environmental impact, and representation in AI outputs. Children of color became upset when not represented in AI-generated images, sometimes choosing not to use the technology as a result. There’s a strong call for children’s voices to be included in AI development and policy decisions.


– **Design and Safety Challenges**: The discussion emphasized that AI systems need to be designed with children’s wellbeing from the start, not retrofitted later. Key concerns include inappropriate content exposure, emotional dependency on AI companions, and the need for transparency about how AI systems work and collect data.


## Overall Purpose:


The workshop aimed to elevate children’s voices in AI design and development by presenting research on how AI impacts children, sharing direct perspectives from young people, and advocating for child-centric approaches to AI development. The session sought to demonstrate that children have valuable insights about AI and should be meaningfully included in decision-making processes about technologies that will significantly impact their lives.


## Overall Tone:


The discussion maintained a consistently serious yet optimistic tone throughout. It began with powerful, articulate messages from young people that set a respectful, non-patronizing approach to children’s perspectives. The research presentations were delivered in an academic but accessible manner, emphasizing both opportunities and concerns. The panel discussion became increasingly collaborative and solution-focused, with participants building on each other’s insights. The presence of young participants (like 17-year-old Ryan) reinforced the workshop’s commitment to including youth voices, and the session concluded on an empowering note with the quote “the goal cannot be the profits, it must be the people,” emphasizing the human-centered approach needed for AI development.


Speakers

**Speakers from the provided list:**


– **Online Participants** – Young people from across the UK sharing their views on generative AI (names not disclosed for safety reasons)


– **Dr. Mhairi Aitken** – Senior Ethics Research Fellow at the Alan Turing Institute, leads the children and AI program


– **Leanda Barrington‑Leach** – Executive Director of the 5Rights Foundation


– **Participant** – Multiple unidentified participants asking questions from the audience


– **Maria Eira** – AI expert at the Center for AI and Robotics at the United Nations Interregional Crime and Justice Research Institute (UNICRI)


– **Adam Ingle** – Representative from the Lego Group, workshop moderator and convener


– **Stephen Balkam** – Founding CEO of the Family Online Safety Institute (FOSI)


– **Mariana Rozo‑Paz** – Representative from DataSphere Initiative


– **Joon Baek** – Representative from Youth for Privacy, a youth NGO focused on digital privacy


– **Co-Moderator** – Online moderator named Lisa


**Additional speakers:**


– **Ryan** – 17-year-old youth ambassador of the OnePAL Foundation in Hong Kong, advocating for digital sustainability and access


– **Elisa** – Representative from the OnePAL Foundation (same organization as Ryan)


– **Grace Thompson** – From CAIDP (asked question online, mentioned by moderator)


– **Katarina** – Law student in the UK studying AI law (asked question online)


Full session report

# Elevating Children’s Voices in AI Design: A Comprehensive Workshop Report


## Executive Summary


The workshop “Elevating Children’s Voices in AI Design,” sponsored by the Lego Group, brought together leading researchers, policy experts, and young people to address the critical gap between children’s experiences with artificial intelligence and their representation in AI development decisions. The session featured participants from the Family Online Safety Institute, the Alan Turing Institute, and UNICRI (United Nations Interregional Crime and Justice Research Institute), alongside direct contributions from young people across the UK and internationally.


The discussion revealed a fundamental challenge: whilst children are already using generative AI at significant rates, AI systems are not designed with children’s needs, safety, or wellbeing in mind. This pattern mirrors previous technology rollouts where child safety considerations were retrofitted rather than built in from the start. The workshop established that children possess sophisticated understanding of AI’s implications and valuable insights for its development, emphasizing the need for meaningful youth participation in AI governance.


## Opening Perspectives: Children’s Voices on AI


The workshop opened with compelling video messages from young people across the UK who articulated sophisticated perspectives on AI’s potential and risks. These participants emphasized that AI should be viewed as a tool to aid rather than replace humans, stating: “AI is extremely advantageous when used correctly. But when misused, it can have devastating effects on humans.”


The young participants demonstrated remarkable awareness of complex issues surrounding AI development. They highlighted concerns about privacy, describing it as “a basic right, not a luxury,” and showed deep understanding of environmental impacts, noting that “AI training requires massive resources including thousands of litres of water and extensive GPU usage.” They asserted their right to meaningful participation in AI governance: “Young people like me must be part of this conversation. We aren’t just the future, we’re here now.”


Their perspectives on education were particularly nuanced, advocating that “AI should be taught in schools rather than banned, with focus on critical thinking and fact-checking skills.” This position demonstrated their understanding that prohibition is less effective than education in preparing young people for an AI-integrated world.


The session also referenced the Children’s AI Summit, which produced a “Children’s Manifesto for the Future of AI” featuring contributions from young people including Ethan (16) and Alexander, Ashvika, Eva, and Mustafa (all 11).


## Research Findings: Current State of Children’s AI Use


### Family Online Safety Institute Research


Stephen Balkam from the Family Online Safety Institute (FOSI), a 501(c)(3) charitable organization, presented research that revealed an unusual pattern in technology adoption. For the first time, teenagers reported that their parents knew more about generative AI than they did, primarily because parents were learning AI for workplace purposes.


The research revealed distinct usage patterns between generations. Parents primarily used AI for analytical tasks related to their professional responsibilities, whilst teenagers focused on efficiency-boosting activities such as proofreading and summarizing academic work. However, concerning trends emerged showing that students were increasingly using generative AI to complete their work entirely rather than merely to enhance it.


Both parents and teenagers expressed shared concerns about job displacement and misinformation, though they remained optimistic about AI’s potential for learning and scientific progress. Data transparency emerged as the top priority for both groups when considering AI companies.


Stephen also conducted an interactive demonstration with the audience, showing AI-generated versus real images, including examples from Google’s new Veo video generator, to illustrate the increasing sophistication of AI-generated content and the challenges this poses for detection.


### UNICRI Global Survey Insights


Maria Eira from UNICRI’s Centre for AI and Robotics shared findings from a survey published three days prior to the workshop, covering 19 countries across Europe, Asia, Africa, and the Americas. The research revealed significant communication gaps between parents and children regarding AI use. While parents demonstrated awareness of their children’s academic AI applications, they often remained unaware of more personal uses, such as AI companions or seeking help for personal problems.


The research identified a crucial correlation: parents who regularly used generative AI themselves felt more positive about its impact on their children’s development. This finding suggested that familiarity with technology shapes attitudes toward children’s use.


Eira’s research also highlighted the need for separate legislative frameworks specifically targeting children’s AI rights, recognizing that children cannot provide the same informed consent as adults and face unique vulnerabilities in AI interactions.


### Alan Turing Institute Children and AI Research


Dr. Mhairi Aitken presented research on children’s direct experiences with AI, funded by the Lego Group. The study found that approximately 22% of children aged 8-12 reported using generative AI, with three out of five teachers incorporating AI into their work. However, the research revealed stark disparities in access and understanding between private and state-funded schools, pointing to emerging equity issues.


The research uncovered particularly significant findings regarding children with additional learning needs, who showed heightened interest in using AI for communication and support. This suggested AI’s potential for inclusive education, though Dr. Aitken emphasized that development must be grounded in understanding actual needs rather than technology-first approaches.


When given choices between AI tools and traditional materials for creative activities, children overwhelmingly chose traditional tactile options. They expressed that “art is actually real” whilst feeling they “couldn’t say that about AI art because the computer did it, not them.” This preference revealed children’s sophisticated understanding of authenticity and creativity.


The research also documented concerning issues with bias and representation in AI outputs. Children of color became upset when not represented in AI-generated images, sometimes choosing not to use the technology as a result. Similarly, children who learned about the environmental impacts of AI models often decided against using them.


## Panel Discussion and Key Themes


### Design and Safety Challenges


The panel discussion revealed that AI systems fundamentally fail to consider children’s needs during development. Stephen Balkam noted that this pattern repeats previous web technologies where safety features were retrofitted rather than built in from the start. Dr. Aitken emphasized that the burden should be on developers and policymakers to make systems safe rather than expecting children to police their interactions.


Particular concerns emerged around AI companions and chatbots, with evidence that young children were forming emotional attachments to these systems and using them for therapy-like conversations. This raised questions about potential dependency and isolation from real community connections.


### Educational Impact and Equity


The research revealed troubling equity gaps in AI access and education. Children in private schools demonstrated significantly more exposure to and understanding of generative AI compared to their peers in state-funded schools, suggesting that AI could exacerbate existing educational inequalities.


However, the discussion also highlighted AI’s potential for supporting inclusive education, particularly for children with additional learning needs who showed interest in using AI for communication support.


### Privacy, Transparency, and Rights


Data protection emerged as a fundamental concern across all speakers. The young participants’ assertion that privacy is a basic right was echoed by researchers who emphasized the need for transparency about AI system operations and data collection practices. Stephen Balkam noted the ongoing challenge of balancing safety and privacy, observing that more safety potentially requires less privacy.


## International Youth Participation


The workshop included international youth participation, notably from 17-year-old Ryan, a youth ambassador of the OnePAL Foundation in Hong Kong, who asked specifically about leveraging generative AI for supporting people with disabilities. Elisa, also from the OnePAL Foundation, raised questions about power imbalances between children and AI systems. Zahra Amjed was scheduled to join as a young representative but experienced technical difficulties.


## Areas of Consensus and Ongoing Challenges


Participants agreed on several fundamental principles:


– AI systems must be designed with children’s needs and safety in mind from the outset


– Children must be meaningfully included in AI decision-making processes


– Transparency about data practices and privacy protection are essential requirements


– AI shows significant potential for supporting children with disabilities and additional learning needs


– Environmental responsibility must be considered in AI development


However, several challenges remained unresolved. Maria Eira noted that the long-term impacts of AI technology on children remain unclear, with contradictory research results. The challenge of creating AI companions that support children without fostering dependency remained unaddressed, and questions about the global implementation of AI literacy programs require continued attention.


## Emerging Action Items and Recommendations


The discussion generated several concrete initiatives:


**Immediate Initiatives**: UNICRI announced the launch of AI literacy resources, including a 3D animation movie for adolescents and a parent guide, at the upcoming AI for Good Summit.


**Industry Responsibilities**: Technology companies were called upon to provide transparent explanations of AI decision-making processes, algorithm recommendations, and system limitations.


**Educational Integration**: Rather than banning AI in schools, participants advocated for integration with strong emphasis on critical thinking and fact-checking skills.


**Research and Development**: The discussion highlighted needs for funding research on AI literacy programs and designing AI tools with children’s needs prioritized from the start.


**Legislative Approaches**: Participants called for separate legislation specifically targeting children’s AI rights and protections, recognizing children’s unique vulnerabilities in AI interactions.


## Conclusion


The workshop established that the question is not whether children are ready for AI, but whether AI is ready for children. Current systems fail to meet children’s needs, rights, and developmental requirements, necessitating fundamental changes in design approaches, regulatory frameworks, and industry practices.


As Maria Eira emphasized, echoing the sentiment of young participants: “the goal cannot be the profits, it must be the people.” This principle encapsulates the fundamental shift required in AI development—from technology-first approaches toward human-centered design prioritizing children’s rights, wellbeing, and meaningful participation.


The workshop demonstrated that when children’s voices are genuinely heard and valued, they contribute essential perspectives that benefit not only young people but society as a whole. Moving forward, the emphasis must be on meaningful youth participation in AI governance, transparent and child-friendly AI systems, critical AI literacy education, and regulatory approaches that protect children’s rights while respecting their agency.


Session transcript

Adam Ingle: Hi, everyone. Thank you for joining this panel session workshop called Elevating Children’s Voices in AI Design. Sponsored by the Lego Group and also participating is the Family Online Safety Institute, the Alan Turing Institute, and the Center for AI and Robotics at the United Nations Interregional Crime and Justice Research Institute. We’ve got an excellent workshop for you today, where you’ll hear all about insights from the latest research on the impact of AI on children, and also hear from young people themselves about their experiences and their hopes. So this is just a quick run of the show. We’re going to start with a message from the children about their views on generative AI, and then we’re going to hear some of the latest research from Stephen Balkam, who’s the founding CEO of the Family Online Safety Institute. Maria Eira, who’s an AI expert at the Center for AI and Robotics in the UN. Mhairi Aitken, a Senior Ethics Research Fellow at the Alan Turing Institute. Then we’ll move on to a panel discussion and questions. Please feel free to ask questions. We want to take them from the audience, both in the room and online. We’ll also have a young person, Zahra Amjed, join us to share her insights and ask the panel questions herself. But without further ado, let’s get underway, and we’re going to start with this video message from young people across the UK. We’re not disclosing names just for safety reasons, but please play the message and the video when you’re ready.


Online Participants: AI is extremely advantageous when used correctly. But when misused, it can have devastating effects on humans. That’s why we must view AI as a tool to aid us, not to replace us. Right now, students are memorising facts by adaption, while AI is outpacing that system entirely. Rather than banning AI in schools, we should teach students how to use it efficiently. Skills like fact-checking, critical thinking, and prompt engineering aren’t optional anymore, they’re essential. We need to prepare students for a world where AI is everywhere, teaching them to use it efficiently while not relying on it. I feel that AI can help humanity in the future, but it also can harm, so it must be used in an ethical manner. I find AI really fun, but sometimes it’s not safe for children because it gives bad advice. Privacy is not a luxury, it’s a basic right. The data that AI collects is valuable, and if it’s not protected, it can be used to hurt the very people it’s supposed to help. The goal cannot be the profits, it must be the people. LLMs consume thousands of litres of water during training, and GPT-3 required over 10,000 GPUs over 15 days. Hundreds of LLMs are being developed, and their environmental impact is immense. Like all powerful tools, AI must be managed responsibly, or its promise will become a problem. The choices that governments and AI developers make today will not just affect the technology, but our lives, our communities, and the world that we leave for our next generation. Young people like me must be part of this conversation. We aren’t just the future, we’re here now. Our voices, our experiences, and our hopes must matter in shaping this technology. I think adults should listen to children more because children have lots of good ideas, as well as adults, with AI. Artificial intelligence is a rising tide, but tides follow the shape of the land, so we must shape that land. We must set the direction, and we must act. 
to decide, together, the kind of world that we want to build. Because if we don’t, that tide may wash away everything that we value most. Fairness, privacy, truth, and even trust. AI holds this incredible promise, but that promise will only be fulfilled if we build it with trust, with care, with respect, and with a clear vision of the kind of world that we want to create, together. Thank you.


Adam Ingle: Well, thank you so much to all the young people there that put together those pretty powerful messages. I mean, from our perspective at the Lego Group, and also I know from all my co-panelists, this is all about elevating children’s voices and not being patronizing to their views, making sure they’re part of decision-making. And it’s great to see such eloquent young people who have real ideas about the future of AI, and we’re here to kind of discuss them more. I’m gonna pass over to Stephen Balkam now to talk about his latest research from the Family Online Safety Institute about the impact of AI on children.


Stephen Balkam: Well, thank you very much, Adam, and thank you for convening us and bringing us here. Really appreciate it. For those of you who are not familiar, FOSI, the Family Online Safety Institute, we are a 501c3 charitable organization based in the United States, but we work globally. Our mission is to make the online world safer for kids and their families. And we work in what we call the three Ps of policy, practices, and parenting. So that’s enlightened public policy, good digital practices, and digital parenting, which is probably the most difficult part of this, where we try to empower parents to confidently navigate the web with their kids. And the web increasingly is AI-infused, shall I say. I want to begin by just saying that two years ago, in 2023, we conducted a three-country study called Generative AI Emerging Habits, Hopes, and Fears. And at the time, we believe it was the first survey done around generative AI, given that ChatGPT had emerged only a few months before. And we talked to parents and teens in the U.S., in Germany, and in Japan, and some of the results surprised us. And you can see in the slide, and I’ll talk to those data points. First thing that we found which surprised us was that teens thought that their parents knew more about generative AI than they did. With previous trends, particularly in the early days of the web, and then web 2.0, and social media, kids were always way ahead of their parents in terms of the technology. But in this case, a large, sizable share of teens in all three countries reported that their parents had a better understanding than they did. And we dug a little deeper and found that, of course, many of the parents were struggling to figure out how to use gen AI at work, or at the very least, try to figure it out before gen AI took over their jobs. But anyway, that was the first interesting trend. 
Parents, for their part, said that they used it mainly for analytical tasks, such as using gen AI platforms as a search engine and as a language translator. And that’s only increased over the last couple of years. Teens mostly were looking for it for efficiency boosting tasks, such as proofreading and summarizing long texts. to make them shorter and faster to read. And we’ve already seen some interesting developments in those two years where ChatGPT is actually, instead of just being used to proofread and analyze their work, teens and young people are increasingly using Gen AI to do their work for them, their essays, their homework, whatever. In terms of concerns, job loss was the number one concern for both parents and teens, and also the spread of false information, which has only been accelerating since we did that study. Other concerns, loss of critical thinking skills was the parents’ number three, whereas kids were more concerned about new forms of cyberbullying, again, which is something we’ve been seeing since we did that study. There was a lot of excitement, too. I mean, obviously concerns, but parents and teens both shared an optimism that Gen AI will, above all else, help them learn new things. Very excited also for AI’s potential to bring progress in science and healthcare, and to free up time to reducing boring tasks as well as progress in education. But then when we asked them about who was responsible for making sure that teens had a safe Gen AI experience, interestingly enough, parents believed that they were the most, had to take the greatest responsibility for ensuring their teen’s safety. And this was particularly true in the United States where, I’m afraid to say, we have less trust in our government to guide and to pass laws. Other countries were more heavily reliant on their own governments. and tech companies. And then we asked the question, what do parents and teens want to learn? 
And what are the topics that would help them navigate these conversations and address their concerns about Gen AI more broadly? And top of the list was transparency of data practices. And secondly, steps to reveal what’s behind Gen AI and how data is sourced and whether it can be trusted was a key element. Another area they felt that industry should take note of, that data transparency is top of mind for parents and teens, and that companies should take strides to be more forthcoming about how users’ data is being collected and used, which I think is something that we’ll hear more about in the next presentation. And then fast forward to this year, we conducted what we call the Online Safety Survey, now titled Connected and Protected, in the United States at the end of 2024 and into 2025. And this was a survey more about online safety trends in general, but we did include questions about Gen AI in the research. And a basic question, do you think that AI will have a positive or negative impact on each of the following areas? And these areas were creativity, academics, media literacy, online safety, and cyberbullying. And in each of these categories, kids were more likely to be optimistic about AI’s impact on society. Think about that. Kids felt more optimistic than their parents that AI was going to have a positive impact. Now parents weren’t necessarily pessimistic.
And when comparing data from wave one of the survey with wave two, we saw that parents in the second wave were much more likely to say that their child had used Gen AI for numerous tasks, including help with school projects, image generation, brainstorming, and more. In the first wave of this survey, we asked participants to identify if images were real or AI generated. Each respondent was presented with three images from a lineup of six to ensure accurate data. Less than half of respondents correctly identified two or more images, and you’re going to see an example of that in a moment. Less than 10% of respondents correctly identified all three images. And we’ll see how well you guys do in a minute. On the bright side, over four or five respondents correctly identified at least one image. And again, this survey was done before Google’s video generator came out, Yeho, which is just mind boggling how fast the developments are in this space. And some of the videos and images that have come out of that video generator are quite astounding. So based on this study, Fossey recommends the following. That technology companies be much more transparent about AI technology, providing families with a clear explanation of why a Gen AI system produced a certain answer, why an algorithm is recommending certain content, and what the limitations of AI tools like chatbots are. Industries should also learn from past mistakes and design AI tools with children in mind, not as an afterthought. And industry needs to fund research and programs that will help children learn AI literacy so they are better able to discern real content from AI-generated content and make informed decisions based on that knowledge. So now I’m going to test you guys on these three images and have a look and just have a show of hands. I don’t know how we’re going to do this online. But how many of you think that the first image is real? Any takers for real? Okay. How many for AI-generated? All right. 
More real than AI. Okay. Second one. AI? Real? All right. And the last one, real? Or AI? All right. Well, you guys did pretty well. The first one is a real painting. I’ve got the actual citation for you if you want to find out who the artist was. And yes, the second two were both AI-generated. Interestingly enough in our study, more men than women thought number two was real. Maybe that was wishful thinking. You can make your own conclusions. I think 85 to 90% of women immediately saw that she was not real. And if you look closely, her earrings don’t match, which again, I didn’t see that. So, anyway, back to you.


Adam Ingle: Thanks, Stephen. I performed poorly on that test, I will admit. So next up we’ve got Maria, and she’s an AI expert at the United Nations… Sorry, it’s a complex acronym. The United Nations Interregional Crime and Justice Research Institute and their Center for AI and Robotics. Maria, please take it away. She’s joining us online.


Maria Eira: Hello, everyone. Can you hear me and see my slides? Everything is working? Yes. Perfect. Thank you so much, Adam. And good afternoon, everyone. First of all, I would like to thank you, Adam, and the Lego Group for the invitation to be part of this very interesting workshop. So I work at the Center for AI and Robotics of UNICRI. Indeed, it’s a complex, long name for a UN research institute that focuses on reducing crime and violence around the world. And the center has a particular mandate to understand how AI can contribute to reducing crime and also how it can be used by malicious actors for criminal purposes, for example. And so now I will present to you a project that we have together with Walt Disney to promote AI literacy among children and parents. And we focus on AI, but particularly on generative AI. So to start this project, we were trying to understand the parental perspectives on the use and impact of gen AI on adolescents, a little bit as FOSI was doing. So we distributed a survey worldwide. The survey was targeting parents and we received replies from 19 countries across Europe, Asia, Africa and the Americas. So, we just published this paper three days ago. The paper includes all the conclusions from this survey. It’s free access and you can access it via the QR code, but I already brought you here the main conclusions from this survey. So we had two main conclusions. So the first one, we understood that there is a lack of awareness from parents and low communication between parents and their children on how adolescents are using generative AI tools. And we were targeting parents of adolescents of 13 to 17 years old. And so on the left, we have a graph, I don’t know if you can see it, but I will describe it a little bit. So this graph is parents’ insights on teenagers’ generative AI use across different activities. 
And so on the first smaller graph we have, the activity is to search or get information about the topic. And so we can see that more than 80% of parents report that their kids are using generative AI to search information about the topic. And they are also using it quite often to help with school assignments. So for more academic purposes, we can see that parents are aware that their kids are using generative AI. However, for more personal uses, such as using generative AI as a companion or to ask for help to personal or health problems, we can see that the most popular reply was either I disagree. So they feel that their kids never use generative AI for these more personal purposes. And the second most popular reply was, I don’t know. So, this confirms a little bit, although we were saying right now that parents are becoming more aware. But still, we can see that as a worldwide distribution, a lot of parents still don’t know if their children are using generative AI for more personal uses. The second conclusion is we can see here on the graph on the right. And so, we started by, it’s basically, we understood that parents who use, I’m already giving the conclusion. So, parents who use generative AI tools feel more positive about the impact that this technology can have on their children’s development. And so, we can see on the graph on the right, so we have started by dividing parents according to their familiarity with generative AI tools. And so, we divide it into regular users, the ones who use generative AI every day or a few times per week, sporadic users, the ones who use generative AI a few times per month or less, and unfamiliar audience who never tried or never heard about this technology. 
And so, we can see that the regular users, so the yellow bars here, feel much more positive about the impact that the technology can have on critical thinking, on their career, on their social life, and also on the general impact that this technology can have on kids’ development. And so, the unfamiliar parents, so the blue ones here, were negative in all these fields. So, this shows that when parents are familiar with the technology, when they use the technology, they see it differently. And viewing this technology in a positive way also helps children to use it in a more positive way and not fear this technology so much. And so besides engaging with parents, we also engaged with children and we organized a workshop in a high school to collect the perspectives from the adolescents. And I brought here some interesting comments and feedback from children. So when we asked them where did they learn about generative AI, they mentioned friends, they mentioned TikTok, my 20-year-old brother. So we can see that they are not learning how to use these tools in schools or from other trustworthy sources, let’s say. And when we asked them what’s one thing that adults should know about how teenagers are using generative AI, their replies were they use it to cheat in school, kids use AI to make everything, or adults should know more about it. And I think these were also very interesting to see their feedback. And it also helped us a lot to develop the main outcomes of this project. So we basically produced two AI literacy resources that will be launched in two weeks at the AI for Good Summit. So on the left, we have a 3D animation movie for adolescents that explains what AI is, how generative AI works, and very importantly, that not all the answers can be found in this chat box. And on the right, we have a guide for parents on AI literacy to support them in guiding their children to use this technology in a responsible way. 
So we focus a lot on communication, which was something that we concluded from the initial survey: communicating about the potential risks and also exploring the benefits of this technology together, to make parents engage with children and to learn together, because we are all learning on this. The technology is really advancing at a very fast pace, so we will all need to be on top of this development. So if you’d be interested, both resources will be available online soon, so if you’d like to receive them, just reach out to me. I’ll leave my email here. Also, if you have any other questions, I’m happy to reply. So thank you for your time and attention.


Adam Ingle: Thanks, Maria. And now we have Mhairi Aitken, Senior Ethics Research Fellow at the Alan Turing Institute, to discuss research that the Lego Group was actually very proud to sponsor.


Dr. Mhairi Aitken: Thank you, Adam, and thank you for the invitation to join this discussion today. I’m really excited to be a part of this really important panel discussion. Yes, as Adam said, I’m a Senior Ethics Fellow at the Alan Turing Institute. The Turing Institute is the UK’s national institute for AI and data science, and at the Turing, I have the great privilege of leading a program of work on the topic of children and AI. The central driver, the central rationale behind all our work in the children and AI team at the Turing is the recognition that children are likely to be the group who will be most impacted by advances in AI technologies, but they’re simultaneously the group that are least represented in decision-making about the ways that those technologies are designed, developed, and deployed, and also in terms of policymaking and regulation relating to AI. We think that’s wrong. We think that needs to change. Children have a right to a say in matters that affect their lives, and AI is clearly a matter that is affecting their lives today and will increasingly do so in the future. So over the last four years, our team, the children and AI team at the Alan Turing Institute, have been working on projects to develop and demonstrate approaches to meaningfully bring children and young people into decision-making processes around the future of AI technologies. So we’ve had a series of projects and a number of different collaborations, including with UNICEF, with the Council of Europe Steering Committee on the Rights of the Child, the Scottish AI Alliance and Children’s Parliament, and most recently with the Lego Group. So I want to share some kind of headline findings from our most recent research which has looked at the impacts of generative AI use on children and particularly on children’s well-being and also share some messages from the Children’s AI Summit which was an event that we held earlier this year. 
So firstly from our recent research and this is a project that was supported by the Lego Group and looked at the impacts of generative AI use on children particularly children between the ages of 8 and 12. There were two work packages in this project, the first work package was a national survey so we surveyed around 800 children between the ages of 8 and 12 as well as their parents and carers and surveyed a thousand teachers across the UK. Now this research revealed that around a quarter of children, 22% of children between the ages of 8 and 12 reported using generative AI technologies and the majority of teachers, so three out of five teachers reported using generative AI in their work. But we found really stark differences between uses of AI within private schools and state-funded schools and this is in the UK context, with children in private schools much more likely both to use generative AI but also report having information and understanding about generative AI and this points to potentially really important issues around equity in access to the benefits of these technologies within education. We also found that children with additional learning needs or additional support needs were more likely to report using generative AI for communication and for connection and also from the teacher survey we find that there was significant interest in using generative AI to support children with additional learning needs. This was also a finding that came out really strongly in work package two of this research. 
Work package two was direct engagement with children between the ages of 9 and 11 through a series of workshops in primary schools in Scotland and throughout these workshops we found that children were really excited about the opportunity to learn about generative AI and they were really excited about the ways that generative AI could potentially be used to support them in education and again there was a strong interest particularly in the ways that generative AI could be used to support children with additional learning needs. But we found also that in these workshops where we invited children to take part in creative activities and we gave them the option of using either generative AI tools or more traditional tactile art materials, we found overwhelmingly that children chose to use traditional tactile hands-on art materials. You’ll see on the quote at the bottom, one of the sentiments that was expressed very often in these workshops was this feeling that art is actually real and children felt that they couldn’t say that about AI art because the computer did it, not them. And I think this reveals some really important insights into the choices that children make about using digital technologies and a reminder that those choices are not just about the digital technology, but about the alternative options available and the context and environments in which children are making those choices. Through the research, children also highlighted a number of really important concerns that they had around the impacts of generative AI. And I just want to flag some of these briefly now. One of the major themes that came out through this work was a concern around bias and representation in AI models and the outputs of AI models. Over the course of six full-day workshops in schools in Scotland, we were using generative AI tools. And in this case, it was OpenAI’s ChatGPT and DALL·E to create a range of different outputs. 
And we found that each time children wanted an image of a person, it would by default create an image of a person that was white and predominantly male. Children identified this themselves, and they were very concerned and very upset about it. Particularly for children of colour, who were not represented in the outputs of these models, we found that children became very upset when they didn’t feel represented. In many cases, children who didn’t feel represented by the outputs of models chose not to use generative AI in the future and didn’t want to use it in the future. So it’s not just about the impact on individual children; it’s also about adoption of these tools and how representation feeds into that. Another big area of concern was the environmental impacts of generative AI, and this is something that has come out really consistently through all the work we’ve done engaging children and young people in discussions around AI. Where children have awareness of or access to information about the environmental impacts of generative AI models, they often choose not to use those models. We found in these workshops that where children learnt about the environmental impact, particularly the water consumption and the carbon footprint of generative AI models, they chose not to use those models in the future. They also pointed to this as an area in which they wanted policymakers and industry to take urgent action, both to address the environmental impacts of these models and to provide transparent, accessible information about those impacts. Finally, there were also big concerns around the ways that generative AI models can produce inappropriate and sometimes potentially harmful outputs. 
And children felt that they wanted to make sure that there were systems in place to ensure that children had access to age-appropriate models that wouldn’t risk exposure to harmful or inappropriate content. Now, finally, I just wanted to share some messages from the Children’s AI Summit, which was an event that we held in February of this year. This was an event that my team at the Alan Turing Institute ran in partnership with Queen Mary University of London, and it was supported by the Lego Group, Elevate Great and EY. The event brought together 150 children and young people between the ages of 8 and 18 from right across the UK for a full day of discussions, exploring their hopes and fears around how AI might be used in the future, and setting out their messages for what they wanted to see on the agenda at the AI Action Summit in Paris. From the Children’s AI Summit, we produced the Children’s Manifesto for the Future of AI, and I’d really encourage you to look it up and have a read. It’s written entirely in the words of the children and young people who took part, and it sets out their messages for what they want world leaders, policymakers and developers to know when thinking about the future of AI. I just want to finish with a couple of quotes from the children and young people who took part in the Children’s AI Summit; their message is really for you all here today, about what needs to be taken on board when thinking about the role of children in these discussions. So firstly from Ethan, who is 16, and he says: hear us, engage with us, and remember, AI may be artificial, but the consequences of your choices are all too real. And secondly, we have a quote from Alexander, Ashvika, Eva and Mustafa, who were all aged 11 and presented jointly at the Children’s AI Summit. They said: we don’t want AI to make the world a place where only a few people have everything and everyone else has less. 
I hope you can make sure that AI is used to help everyone to make a safe, kind, and fair world. And I think that sums up the ethos of the Children’s AI Summit perfectly, and is also a mission that we really all need to get behind and make a reality. Thank you.


Adam Ingle: Thanks, Mhairi, and to Stephen and Maria as well, for some really exciting research findings. We’re going to move towards a panel session now, so we’ll take questions from the audience, both in person and online. If you’d like to think about some questions, feel free to then ask them; if you’re online, you can ask the online moderator, Lisa, who will ask those questions for you. I’ve got a few myself, though, and we’re actually waiting for Zahra, our young representative, to join. I think there have been some technical difficulties there, so hopefully she’ll be joining us soon so we can hear directly from her. But to start things off: we heard a lot in the research that children are already using AI, across multiple different contexts and for multiple different purposes. I want to take a step back and ask, are children ready for AI, or is AI ready for children? Just as an open question to all the panellists here.


Dr. Mhairi Aitken: I’ll give that one a go. I mean, I think one of the big challenges we’re finding is that children of all ages are already interacting with AI on a daily basis. That starts with infants and preschool kids playing with smart toys and smart devices in the home, through to generative AI technologies and the ways that AI is used online on social media. And a lot of the problem here is that these tools are being used by children and young people of all ages, but they’re not designed for children and young people. We know that the ways children interact with AI systems are often very different from how adults engage with those tools, or digital technologies more generally, and often very different from how the designers or developers of those systems anticipate they might be used. And I think there’s possibly a risk that we then put the burden or the expectation on children and young people themselves to police those online interactions and to keep themselves safe online, whereas actually the burden has to be on the developers, the policymakers and the regulators to make sure that those systems are safe, and that there are age-appropriate tools and systems available for children to access and benefit from.


Stephen Balkam: Yeah, this feels like déjà vu all over again. I was very much involved in web 1.0 back in the mid-90s, and it became very clear that the World Wide Web was not designed with kids in mind. We had to retrofit websites and create parental controls for the first time, but we never really caught up. Then web 2.0 came along around 2005-2006, and sites like MySpace and then Facebook took off, first in colleges, then in high schools, then all the way down to elementary grade school level. Once again, not with kids in mind. And we’re just repeating that one more time with this AI revolution. There’s a great deal of concern, particularly around kids trusting chatbots, for instance. We’re seeing a lot of emotional attachment from quite young kids talking to chatbots, thinking that they are real, and unloading their own personal thoughts to them. And for older teens and college-age kids, the fact that they’re using gen AI for doing their work, their homework, their projects and essays means that they’re not developing critical thinking skills, but going straight to gen AI for results. And that is probably of greater concern.


Adam Ingle: Thank you, Stephen. Maria, do you have any contributions to that question?


Maria Eira: Yeah, I definitely agree with everything that was said. Just adding that it’s not only that the AI systems are not ready, or that the kids are not ready for AI, but the whole environment. In terms of AI literacy, most people don’t really understand what AI is or how it works. Is it a type of magic? At the end of the day, it’s actually just computations and statistical models. So it’s not just the technology that isn’t ready; it’s the whole environment, in terms of AI literacy in schools and so on.


Adam Ingle: Thank you. I’ve definitely got some more questions, but I can see we have someone in the audience that would like to ask a question. So please introduce yourself and ask the panel.


Mariana Rozo‑Paz: Thank you. Hi, everyone. I’m Mariana from the DataSphere Initiative. I hope you can hear me well. We have a youth project that has been engaging young people for a couple of years, and I wanted to thank you all for the amazing presentations and the amazing work that you’re doing. It’s very important that we have all of these stats, numbers, stories and experiences, and thank you also for starting with a video from children and closing with quotes. This introduction is just to say that we’re starting a new phase in our project, focusing on influencers: not just kids that are becoming influencers, but children that are sometimes turned into influencers by their parents, which also has mind-blowing stats behind it; adults that are becoming influencers and directly influencing children, not only to consume and buy their products or other products; and we’re also looking into AI agents as influencers in this digital space. As I think one of the girls sharing her story was saying, it’s not just that they’re influencing children’s digital lives; it’s actually their very concrete lives and the relationships that they have with each other. So I just wanted to ask, and I think Stephen was already mentioning a bit around the influence of other children and social media: have you done any research on how influencers are shaping the space and how children and youth are experiencing social media in general? And have you started to ask about AI agents, and how they are influencing, particularly, the relationships children have in real life? That was a lot of questions, but thanks again so much.


Stephen Balkam: Yeah, I’ll try to respond to part of what you were saying. I mean, the technology is moving so fast that it’s incredibly hard for the research to keep up, number one. No, we haven’t yet asked about AI bots being an influencing factor, although we are anecdotally seeing kids, teens, young adults and adults using AI for therapy. I mean, literally talking through deep emotional issues for hours at a time and getting responses from ChatGPT and others in a way that is very positive and self-reinforcing, but also potentially extremely dangerous, in the sense that an artificial intelligence bot is not human, will not be able to pick up on body cues and all the rest of it, and may not actually be able to challenge you in the way that a real human therapist will. One other point I’ll get to quickly: on the whole influencing world, there’s new legislation popping up, in the United States at least, that will at least compensate kids who’ve been part of a family vlog all their childhood, a bit like kid movie stars were back in the 30s. So now at least they’re getting compensation, and a right, when they turn 18, to delete the videos they had no true consent to be a part of. But there’s a broader societal question about monetizing our kids. We are not in favor of that, particularly because there’s no way that a 7, 8, 9-year-old can give consent: yes, please film me every day and post this online so that I can go through college and you don’t have to pay, mom and dad. So anyway, maybe we’ll talk later, because you had a lot of different points in there.


Dr. Mhairi Aitken: Maybe I could just pick up on how this relates to the growth of AI companions and the gender divide in this context. Influence isn’t necessarily something we’ve looked at so much in our research, which is mostly focused on 8 to 12-year-olds; not to say that they’re not already being influenced, and many of them, certainly 12-year-olds, are beginning to be on social media. But AI companions, I think, is an area that we really need to urgently get to grips with. There are more and more of these AI companions, AI personas, that are clearly being marketed towards children and young people, and we don’t really yet know what the impacts of that might be. There’s growing research, but we need more, and we need more action to be taken on this, including on AI companions that are marketed as addressing challenges of loneliness but then potentially create a dependence, a connection to something that is very much outside of society and community, potentially exacerbating the very challenges they claim to address, which brings a particular set of risks. At the Children’s AI Summit, which again involved children between the ages of 8 and 18, there was a lot of interest among the teenagers in potentially using AI companions to support children’s mental health, and a lot of interest in how that could be done. But what would it mean to design and develop these tools in ways that are age-appropriate, that are safe, that have children’s well-being and children’s mental health as a key element of the design process? At the moment the risk is that these tools are being developed and promoted without children’s well-being and children’s interests in mind in the development process, but they are increasingly being relied on and used for those purposes. 
So I think yeah it’s an area that we’re seeing a lot of interest from from children and young people but with a recognition that this needs to be done responsibly, safely and cautiously. Thanks.


Adam Ingle: Leanda, I see you’ve got a question. Please.


Leanda Barrington‑Leach: Hello everyone, I’m Leanda Barrington-Leach, Executive Director of the 5Rights Foundation. Thank you so much for the presentations and for the research you’re doing, which is absolutely fabulous. I could ask lots of things and comment on lots of things, but I just wanted to take the opportunity, given what you’re saying about the importance of designing AI with children’s rights in mind from the start, of raising awareness that there are regulatory and technical tools out there to do this, and in particular the Children and AI Design Code, which the Alan Turing Institute also contributed to. That was work that brought AI experts, children’s rights experts and many others together over a very long period of time to develop a technical protocol for innovation that puts children’s rights at the center. So I just wanted to draw awareness to this, to say that we all agree that it’s so important, but also to know that there are actually tools out there to make it happen. Thank you.


Adam Ingle: Thanks, Leanda. Lisa, I think we’ve got an online question.


Co-Moderator: We do indeed. Katarina, who is studying law in the UK, AI law specifically, is asking: should AI ethics for children be separated from general AI ethics? That’s the first question. Second question: do you think there should be state-level legislation or policies for AI systems targeting specifically children? Thank you.


Adam Ingle: Maria, I’ll pass to you first if you want to answer either of those questions.


Maria Eira: Yes, sure. Thank you for your question. It’s very relevant indeed. And definitely, yes: separate legislation should target children, because children don’t have the same capacity to consent, or, for example, the same awareness of what consent means. There are several principles that cannot be fully transferred from adults to children, so we definitely need to have children’s rights in mind when developing this legislation.


Adam Ingle: Thanks, Maria. Stephen or Mhairi, do you want to comment? Just one of you, because we’ve got a few questions and I do want to get to everyone.


Dr. Mhairi Aitken: Yeah, I mean, I would agree that children have particular rights, they have particular needs, unique needs and experiences that should be addressed. I guess one other part of it is that if we design this well for children and if we get the regulatory requirements, policy requirements right for children, this benefits well beyond children as well. An AI system that’s designed well with children in mind is also going to have benefits in terms of other vulnerable users and wider user groups. So I think yes, there are unique perspectives, unique considerations that should be addressed, but the benefits go beyond that.


Adam Ingle: So before I go to other questions in the room, I just want really quick responses from the panel. Leanda mentioned the age-appropriate AI design code, which is a tool to help companies think about how to build AI in a child-rights and well-being way. What do you think are the research gaps? We’ve got tools like this, but what is, to your mind, the one outstanding research gap that needs to be addressed before we can really be confident that there is a child-centric approach to AI development? Maybe reflect on that as we take some other questions, and I’ll come back to it later, because I do want to think about the research gaps and a path forward to really understanding how to do this responsibly. So let’s take a question from this gentleman here.


Joon Baek: Hello, my name is Joon Baek, from Youth for Privacy. We are a youth NGO focused on digital privacy, so I want to ask about children’s rights in AI. In the context of privacy, there has been some legislation where, under the aim of protecting children’s data or safeguarding children online, there have been concerns about those kinds of laws creating privacy issues. I was wondering, under the aim of protecting children when it comes to AI, could some other kinds of rights be in question or violated? Is there anything we should be aware of?


Adam Ingle: So, you’re talking about the trade-off between protecting children’s rights and other issues that might arise. Yeah. Stephen? Mhairi? Maria?


Stephen Balkam: Pretty much, you know, I went back to 1995. I mean, we’ve been struggling with the dichotomy between safety and privacy since the beginning of the web. In other words, the more safe you are, perhaps the more you’re giving up in terms of private information. Or the more private you are, maybe you’re not as safe as you could be. So trying to find a way that balances both has been at the core, certainly, of the work of my organization, but many others, and it is extremely hard for lawmakers to get that balance right. And then if you come from the U.S., you then have this other axis, which is called free expression, which adds another layer of complexity, too, because you want people to be private, you want people and kids to be safe, but you also want one of the five rights, by the way, is the right to say what you want to be able to say. So it’s just going to be something which I don’t think will ever completely get right. And we’re going to constantly have to compromise. But I don’t think it’s beyond our ability to reach those compromises.


Adam Ingle: Just noting time, I might move on to this gentleman here.


Participant: Hi, my name is Ryan, I’m 17 years old, and I’m a youth ambassador of the OnePAL Foundation in Hong Kong. We’re advocating for digital sustainability and access in Hong Kong. So thank you for the wonderful presentations. My question is: AI for people with learning disabilities was raised as a significant prospect of AI by children from 8 to 12 years old. So how can generative AI be further leveraged for the support and inclusion of people with disabilities? Thank you.


Adam Ingle: Thank you. And I’m just wondering, drawing from your research, Mhairi, if you want to elaborate.


Dr. Mhairi Aitken: Yeah, it’s come out really strongly from all the work we’ve done engaging children and young people that this is an area where they’re really excited about the potential: they want to see AI developed in ways that will support children with additional learning needs, additional support needs and disabilities. Particularly in the education context, supporting children with additional learning needs, there’s huge promise here, and teachers in our study recognised that, children in our study recognised that. But I think one of the challenges or current limitations is that there are a lot of edtech tools being pushed and promoted that don’t necessarily begin with a sound understanding of the challenges they’re seeking to address, or of the needs of children with additional learning needs. We need to start developing these technologies from that place: if we want to develop something to support children with additional learning needs, it has to be grounded in a sound understanding of what those needs and challenges are. Then maybe generative AI provides a solution, but not always, not necessarily. We have to start by identifying the problems and challenges, and develop those tools responsibly to effectively address them. That requires expertise from teachers, from children, from specialists in these areas to guide the development of those tools and technologies. But it’s definitely an area where there’s huge promise, and where it could be used really effectively and really valuably.


Adam Ingle: Thank you. Great to have a youth representative at the IGF. I mean, my gosh, I was probably playing unsafe video games when I was 17, rather than going to international forums. So incredibly impressive. Lisa, you’ve got a question from online.


Co-Moderator: I do indeed. So I have a question from Grace Thompson from CAIDP, who’s asking: how is UNICRI, and the other entities represented on the panel, working with national government officials on capacity building for school principals, counselling teams and the entire ecosystem, to prepare adults to protect our children and adolescents?


Adam Ingle: Maria, I think that’s one for you.


Maria Eira: Yeah, sure. Thank you for your question, Grace. So, as I was showing before, we are developing AI literacy resources for parents, which we will try to disseminate as much as possible: basically, recommendations for parents to guide their children in the use of this technology. So that’s one thing. We are also trying to work with governments, and particularly with judges and law enforcement, to promote AI literacy. We do a lot of capacity building with law enforcement officers worldwide to explain what AI is and how to use it in a responsible way, and we have guidelines developed with Interpol. So that’s more on the law enforcement side. We would also love to reach other representatives from government and implement AI literacy workshops and programs in schools. We have started with a workshop in a school in the Netherlands, which was also to collect adolescents’ perspectives, but it also had a component explaining what AI is, what the risks and benefits are, and some best practices for using it well. We would love to scale this up, and we are right now in conversations with the Netherlands and other countries to understand whether we can develop a full program that can be implemented in schools. But everything is still being developed. The technology is really recent, and everyone is trying to be prepared for this.


Adam Ingle: Thanks, Maria. We’ll take one final question from the room, and then I will do a quick lightning round among the panelists to answer: what’s one research area we still need to explore to move towards child-centric AI, and what’s one thing companies can do right now to make AI more appropriate for children? Quick answers to those two questions. But please, the lady here.


Participant: Hello, my name is Elisa. I’m also from the OnePAL Foundation, just like Ryan. I see a big issue in children communicating with AI about their personal issues, as children are in a much more vulnerable situation and position, and AI is the bigger person in that conversation. So my question is, how can we design AI so that it doesn’t increase that power imbalance between the child and the all-knowing AI? I didn’t quite get the end of that question. Sorry, just repeat your question. My question is, how can we design AI so that the independence of the child is increased and there is no power imbalance between the child and the AI? You want to try that?


Dr. Mhairi Aitken: Yeah, I mean, I think in all these interactions one thing that’s absolutely crucial is transparency around the nature of the AI system: transparency around how data might be collected through those interactions, potentially used to train future models, or collected by the organization or company developing and owning those models. And if I can tie this into your question about what’s needed, because I think it’s actually related: it’s that kind of critical AI literacy. We hear a lot about the importance of AI literacy and increasing understanding of AI, but what I think is really important is critical literacy: improving understanding not just of how these systems technically work, but of the business models behind them, how they affect children’s rights, and the impact those systems have. So I think that’s where we need more research, but it’s also what’s needed to enable children to make informed choices about how they use those systems.


Adam Ingle: Love that you tied the answer to both questions together; that’s saved us a lot of time. Stephen, 15 seconds. What she said. That’s easy. Maria, one thing we can do in research or one thing companies can do right now?


Maria Eira: Yeah, so in research we are still understanding the long-term impact of this technology. We still don’t know, and the literature reflects this: we have very contradictory results, with some papers saying that AI can improve critical thinking and others saying that it can actually decrease critical thinking. I think we are still in a period where we are trying to understand exactly what the long-term impact of this technology will be. And then, what should companies do? I think the girl in the video at the beginning said it exactly: the goal cannot be the profits, it must be the people. If companies really focus on children when developing these tools, targeting and having children in mind, we can actually develop good tools for everyone.


Adam Ingle: Thanks, Maria. The goal should not be the profit, it should be the people. I think that is a great lesson coming out of this session. That’s all we have time for. Thank you so much for joining us in the room and online. And please, if you’ve got any more questions, feel free to approach Stephen and Mhairi or get in contact with Maria. Thank you to all the young people that engaged with this session, and thank you from the LEGO Group as well. So we’ll end it there and we’ll see you soon. Bye. Thank you.



Online Participants

Speech speed

149 words per minute

Speech length

412 words

Speech time

165 seconds

Young people view AI as advantageous when used correctly but potentially devastating when misused

Explanation

Young people recognize AI as a powerful tool that can provide significant benefits when properly utilized, but they also acknowledge its potential for causing serious harm when misapplied. They emphasize the importance of viewing AI as a tool to aid humans rather than replace them.


Evidence

Students stated ‘AI is extremely advantageous when used correctly. But when misused, it can have devastating effects on humans. That’s why we must view AI as a tool to aid us, not to replace us.’


Major discussion point

Children’s Current Use and Understanding of AI


Topics

Human rights | Sociocultural


AI should be taught in schools rather than banned, with focus on critical thinking and fact-checking skills

Explanation

Young people argue that instead of prohibiting AI use in educational settings, schools should integrate AI education that emphasizes essential skills like critical thinking, fact-checking, and responsible usage. They believe students need preparation for an AI-integrated world while learning not to become overly dependent on the technology.


Evidence

Students noted ‘Right now, students are memorising facts by adaption, while AI is outpacing that system entirely. Rather than banning AI in schools, we should teach students how to use it efficiently. Skills like fact-checking, critical thinking, and quantum engineering aren’t optional anymore, they’re essential.’


Major discussion point

Educational Impact and Equity Issues


Topics

Sociocultural | Human rights


Disagreed with

– Dr. Mhairi Aitken

Disagreed on

Approach to AI literacy and education


Privacy is a basic right, not a luxury, and AI data collection must be protected

Explanation

Young people emphasize that privacy should be considered a fundamental right rather than an optional benefit. They express concern about the valuable data that AI systems collect and the potential for this data to be misused to harm the very people it’s supposed to help.


Evidence

Students stated ‘Privacy is not a luxury, it’s a basic right. The data that AI collects is valuable, and if it’s not protected, it can be used to hurt the very people it’s supposed to help.’


Major discussion point

Privacy, Data Protection and Transparency


Topics

Human rights | Legal and regulatory


Agreed with

– Stephen Balkam
– Dr. Mhairi Aitken

Agreed on

Data transparency and privacy protection are fundamental concerns for AI systems used by children


AI training requires massive resources including thousands of liters of water and extensive GPU usage

Explanation

Young people demonstrate awareness of the significant environmental costs associated with training AI models. They highlight the substantial resource consumption required for AI development, including water usage and computational power.


Evidence

Students noted ‘LLMs consume thousands of litres of water during training, and GPT-3 required over 10,000 GPUs over 15 days. Hundreds of LLMs are being developed, and their environmental impact is immense.’


Major discussion point

Environmental and Ethical Concerns


Topics

Development | Sociocultural


Agreed with

– Dr. Mhairi Aitken

Agreed on

Environmental impacts of AI are significant concerns that influence children’s usage decisions


Young people must be part of AI conversations as they are affected now, not just in the future

Explanation

Young people assert their right to participate in current AI discussions and decision-making processes. They reject the notion that they are only stakeholders for the future, emphasizing that AI impacts their lives today and their voices should matter in shaping the technology.


Evidence

Students stated ‘Young people like me must be part of this conversation. We aren’t just the future, we’re here now. Our voices, our experiences, and our hopes must matter in shaping this technology.’


Major discussion point

Youth Participation and Rights


Topics

Human rights | Sociocultural


Agreed with

– Dr. Mhairi Aitken
– Adam Ingle

Agreed on

Children must be meaningfully included in AI decision-making processes


Adults should listen to children more because they have valuable ideas about AI development

Explanation

Young people advocate for greater inclusion of children’s perspectives in AI development discussions. They believe that children possess valuable insights and ideas that should be considered alongside adult viewpoints when making decisions about AI technology.


Evidence

Students said ‘I think adults should listen to children more because children have lots of good ideas, as well as adults, with AI.’


Major discussion point

Youth Participation and Rights


Topics

Human rights | Sociocultural


D

Dr. Mhairi Aitken

Speech speed

196 words per minute

Speech length

2780 words

Speech time

847 seconds

Around 22% of children aged 8-12 report using generative AI, with three out of five teachers using it in their work

Explanation

Research findings show that a significant portion of young children are already engaging with generative AI technologies, while the majority of teachers are incorporating these tools into their professional practice. This indicates widespread adoption across educational settings.


Evidence

National survey of around 800 children between ages 8-12, their parents and carers, and 1000 teachers across the UK


Major discussion point

Children’s Current Use and Understanding of AI


Topics

Sociocultural | Human rights


Children are the group most impacted by AI advances but least represented in decision-making about AI development

Explanation

There is a fundamental disconnect between who is most affected by AI technology and who has input into its development. Children, despite being the demographic that will experience the greatest long-term impact from AI advances, have minimal representation in the decision-making processes that shape these technologies.


Evidence

Four years of research projects at the Alan Turing Institute's children and AI team, including collaborations with UNICEF, the Council of Europe, and the Scottish AI Alliance and Children's Parliament


Major discussion point

Youth Participation and Rights


Topics

Human rights | Sociocultural


Agreed with

– Online Participants
– Adam Ingle

Agreed on

Children must be meaningfully included in AI decision-making processes


Stark differences exist between AI use in private schools versus state-funded schools, pointing to equity issues

Explanation

Research reveals significant disparities in AI access and education between different types of schools. Children in private schools are much more likely to use generative AI and have better understanding of these technologies, creating potential inequalities in access to AI benefits.


Evidence

UK-based research showing children in private schools much more likely to both use generative AI and report having information and understanding about generative AI


Major discussion point

Educational Impact and Equity Issues


Topics

Development | Human rights | Sociocultural


The burden should be on developers and policymakers to make systems safe rather than expecting children to police their interactions

Explanation

Rather than placing responsibility on children to navigate AI systems safely, the primary obligation should rest with those who create and regulate these technologies. Children interact with AI systems differently than adults and often in ways not anticipated by developers.


Evidence

Recognition that children interact with AI systems differently from adults and often differently from how designers or developers anticipate those tools might be used


Major discussion point

AI Design and Child Safety Concerns


Topics

Human rights | Legal and regulatory


Agreed with

– Stephen Balkam
– Adam Ingle

Agreed on

AI systems are not designed with children in mind and require child-centric development from the start


Children with additional learning needs show particular interest in using AI for communication and support

Explanation

Research indicates that children with additional support needs or learning disabilities are more likely to utilize generative AI for communication purposes and connection. There is significant interest from both children and teachers in leveraging AI to support children with additional learning needs.


Evidence

Survey findings showing children with additional learning needs more likely to report using generative AI for communication and connection, plus teacher interest in using AI to support these children


Major discussion point

Educational Impact and Equity Issues


Topics

Human rights | Sociocultural


Agreed with

– Online Participants

Agreed on

AI has significant potential to support children with additional learning needs and disabilities


AI models consistently produce biased outputs, predominantly showing white and male figures

Explanation

When children used generative AI tools to create images of people, the systems defaulted to producing images of white, predominantly male individuals. This consistent bias in AI outputs was identified and caused concern among the children using these tools.


Evidence

Six full-day workshops in Scottish schools using OpenAI’s ChatGPT and DALL-E, where each time children wanted an image of a person, it would by default create an image of a person that was white and predominantly male


Major discussion point

Bias and Representation Issues


Topics

Human rights | Sociocultural


Children of color become upset and choose not to use AI when they don’t feel represented in outputs

Explanation

When AI systems fail to represent children of color in their outputs, these children experience emotional distress and subsequently choose to avoid using the technology. This lack of representation not only impacts individual children but also affects broader adoption patterns of AI tools.


Evidence

Observations from workshops showing children of colour becoming very upset when not represented, and in many cases choosing not to use generative AI in the future


Major discussion point

Bias and Representation Issues


Topics

Human rights | Sociocultural


Children who learn about environmental impacts of AI models often choose not to use them

Explanation

When children gain awareness of the environmental costs associated with generative AI models, including water consumption and carbon footprint, they frequently make the conscious decision to avoid using these technologies. This pattern has been consistent across multiple research engagements with children and young people.


Evidence

Consistent findings across all work engaging children and young people, where children with awareness of environmental impacts, particularly water consumption and carbon footprint of generative AI models, chose not to use those models


Major discussion point

Environmental and Ethical Concerns


Topics

Development | Human rights


Agreed with

– Online Participants

Agreed on

Environmental impacts of AI are significant concerns that influence children’s usage decisions


AI companions marketed to children raise concerns about dependence and isolation from real community

Explanation

The growing market of AI companions specifically targeted at children presents risks of creating unhealthy dependencies and potentially exacerbating social isolation. While these tools are often marketed as solutions to loneliness, they may actually increase disconnection from real human relationships and community engagement.


Evidence

Growing research on AI companions marketed as addressing challenges of loneliness but potentially creating dependence or connection outside of society and community


Major discussion point

AI Companions and Emotional Attachment


Topics

Human rights | Sociocultural


Transparency about AI system nature and data collection is crucial for child interactions

Explanation

For children to safely interact with AI systems, it is essential that they understand what they are interacting with and how their data might be collected or used. This transparency should include information about the AI system’s capabilities, limitations, and data practices.


Major discussion point

Privacy, Data Protection and Transparency


Topics

Human rights | Legal and regulatory


Agreed with

– Online Participants
– Stephen Balkam

Agreed on

Data transparency and privacy protection are fundamental concerns for AI systems used by children


Critical AI literacy focusing on business models and rights impacts is needed beyond technical understanding

Explanation

While technical AI literacy is important, children need a deeper understanding that includes the business models behind AI systems and how these technologies affect their rights. This critical approach goes beyond just understanding how AI works to understanding why it works the way it does and who benefits.


Major discussion point

Research Gaps and Future Needs


Topics

Human rights | Sociocultural


Disagreed with

– Online Participants

Disagreed on

Approach to AI literacy and education


More research is needed on AI’s role in supporting children with disabilities while ensuring proper understanding of their needs

Explanation

While there is significant promise for AI to support children with additional learning needs and disabilities, current development often lacks proper understanding of the specific challenges and needs these technologies should address. Research and development must be grounded in expertise from teachers, children, and specialists in these areas.


Evidence

Recognition that many edtech tools are being pushed without sound understanding of challenges they seek to address or needs of children with additional learning needs


Major discussion point

Research Gaps and Future Needs


Topics

Human rights | Development | Sociocultural


Agreed with

– Online Participants

Agreed on

AI has significant potential to support children with additional learning needs and disabilities


Designing AI well for children benefits other vulnerable users and wider user groups

Explanation

When AI systems are properly designed with children’s needs and rights in mind, the benefits extend beyond just children to other vulnerable populations and the general user base. Child-centric design principles create better, more inclusive AI systems overall.


Major discussion point

Regulatory and Policy Approaches


Topics

Human rights | Legal and regulatory


M

Maria Eira

Speech speed

134 words per minute

Speech length

1688 words

Speech time

750 seconds

Parents who regularly use generative AI feel more positive about its impact on their children’s development

Explanation

Research shows a clear correlation between parents’ familiarity with generative AI technology and their attitudes toward its impact on their children. Parents who use AI regularly view it more positively across multiple areas including critical thinking, career development, and social work, while unfamiliar parents tend to be negative about AI’s impact.


Evidence

Worldwide survey from 19 countries showing regular users (yellow bars) feel much more positive about AI’s impact on critical thinking, career, social work, and general child development compared to unfamiliar parents (blue bars) who were negative in all fields


Major discussion point

Children’s Current Use and Understanding of AI


Topics

Human rights | Sociocultural


There is a lack of awareness from parents and low communication between parents and children about AI use

Explanation

Research reveals significant gaps in parental understanding of how their adolescent children use generative AI, particularly for personal purposes. While parents are aware of academic uses, they often don’t know or disagree that their children use AI for more personal matters like companionship or health advice.


Evidence

Survey targeting parents of adolescents aged 13-17 showing over 80% of parents aware of AI use for information search and school assignments, but for personal uses like AI companions or health advice, most popular responses were ‘I disagree’ or ‘I don’t know’


Major discussion point

AI Design and Child Safety Concerns


Topics

Human rights | Sociocultural


Company goals should focus on people rather than profits when developing AI tools for children

Explanation

When developing AI technologies for children, companies should prioritize human welfare and child wellbeing over financial gains. This principle emphasizes the need for ethical development practices that put children’s needs and safety first.


Evidence

Reference to student comment from opening video: ‘the goal cannot be the profits, it must be the people’


Major discussion point

Environmental and Ethical Concerns


Topics

Human rights | Economic


Long-term impacts of AI technology on children remain unclear with contradictory research results

Explanation

Current research on AI’s effects on children shows conflicting findings, making it difficult to draw definitive conclusions about long-term impacts. Some studies suggest AI can improve critical thinking while others indicate it may decrease these skills, highlighting the need for more comprehensive research.


Evidence

Literature review showing contradictory results with some papers saying AI can improve critical thinking while others say AI can decrease critical thinking


Major discussion point

Research Gaps and Future Needs


Topics

Human rights | Sociocultural


Children should have separate AI legislation because they cannot give the same consent as adults

Explanation

Children require distinct legal protections regarding AI because they lack the same capacity for informed consent as adults. Several principles applicable to adults cannot be directly applied to children, necessitating specialized legislation that considers children’s unique vulnerabilities and developmental needs.


Evidence

Recognition that children don’t have the same awareness of consent and several principles cannot be fully applicable from adults to children


Major discussion point

Regulatory and Policy Approaches


Topics

Human rights | Legal and regulatory


S

Stephen Balkam

Speech speed

139 words per minute

Speech length

2192 words

Speech time

941 seconds

Teens thought their parents knew more about generative AI than they did, contrary to previous technology trends

Explanation

Unlike previous technological developments where children typically led adoption, research found that teenagers believed their parents had better understanding of generative AI. This reversal occurred because many parents were learning AI tools for work purposes or to stay relevant in their careers.


Evidence

2023 three-country study (US, Germany, Japan) with parents and teens, showing large, sizable share of teens in all three countries recorded that their parents had better understanding, with parents struggling to use gen AI at work


Major discussion point

Children’s Current Use and Understanding of AI


Topics

Sociocultural | Human rights


AI systems are not designed with children in mind, requiring retrofitting for safety like previous web technologies

Explanation

The development of AI technology is repeating the same pattern as previous internet technologies, where systems are created without considering children’s needs and safety, then require after-the-fact modifications. This pattern occurred with Web 1.0 in the mid-90s and Web 2.0 around 2005-2006, and is now happening again with AI.


Evidence

Historical examples of World Wide Web not designed with kids in mind requiring retrofitted parental controls, and social media sites like Myspace and Facebook expanding from colleges to elementary schools without child-focused design


Major discussion point

AI Design and Child Safety Concerns


Topics

Human rights | Cybersecurity


Agreed with

– Dr. Mhairi Aitken
– Adam Ingle

Agreed on

AI systems are not designed with children in mind and require child-centric development from the start


Students are increasingly using Gen AI to do their work rather than just proofread it, potentially impacting critical thinking development

Explanation

There has been a concerning shift in how students use generative AI, moving from using it as a tool for proofreading and summarizing to having it complete entire assignments. This trend raises concerns about students not developing essential critical thinking skills.


Evidence

Comparison between initial study findings where teens used AI for ‘proofreading and summarizing long texts’ versus current observations of ‘teens and young people increasingly using Gen AI to do their work for them, their essays, their homework’


Major discussion point

Educational Impact and Equity Issues


Topics

Sociocultural | Human rights


Data transparency is top priority for parents and teens regarding AI companies

Explanation

Research shows that both parents and teenagers prioritize understanding how AI companies collect, use, and source their data. They want companies to be more forthcoming about data practices and to provide clear explanations about how AI systems work and whether the information can be trusted.


Evidence

Survey results showing ‘transparency of data practices’ as top of list for what parents and teens want to learn, and ‘steps to reveal what’s behind Gen AI and how data is sourced and whether it can be trusted’ as key element


Major discussion point

Privacy, Data Protection and Transparency


Topics

Human rights | Legal and regulatory


Agreed with

– Online Participants
– Dr. Mhairi Aitken

Agreed on

Data transparency and privacy protection are fundamental concerns for AI systems used by children


There’s an ongoing struggle to balance safety and privacy, with more safety potentially requiring less privacy

Explanation

The relationship between online safety and privacy creates a persistent dilemma where increasing one often means decreasing the other. This challenge has existed since the beginning of the web and becomes more complex when adding considerations like free expression rights.


Evidence

Reference to struggling with 'the dichotomy between safety and privacy since the beginning of the web' in 1995, with additional complexity from free expression rights in the US context


Major discussion point

Privacy, Data Protection and Transparency


Topics

Human rights | Legal and regulatory


Disagreed with

– Joon Baek

Disagreed on

Balance between safety and privacy in AI regulation


Young children are forming emotional attachments to chatbots and using AI for therapy-like conversations

Explanation

There is growing concern about children, teens, and young adults developing emotional dependencies on AI chatbots, using them for extended therapeutic conversations. While these interactions can feel positive and self-reinforcing, they lack the human elements essential for proper mental health support.


Evidence

Anecdotal observations of ‘kids, teens, young adults and adults using AI for therapy, literally talking through on hours at a time deep emotional issues’ with responses from ChatGPT and others


Major discussion point

AI Companions and Emotional Attachment


Topics

Human rights | Sociocultural


L

Leanda Barrington‑Leach

Speech speed

173 words per minute

Speech length

181 words

Speech time

62 seconds

There are existing regulatory and technical tools like the Children and AI Design Code to implement child-centric AI development

Explanation

Regulatory and technical solutions already exist to address the need for child-focused AI development. The Children and AI Design Code represents a collaborative effort between AI experts, children’s rights experts, and other stakeholders to create practical protocols for innovation that prioritizes children’s rights.


Evidence

Reference to the Children and AI Design Code as work that ‘brought AI experts and children’s rights experts and many others together over a very long period of time to develop a technical protocol for innovation that puts children’s rights at the center’


Major discussion point

Regulatory and Policy Approaches


Topics

Human rights | Legal and regulatory


A

Adam Ingle

Speech speed

169 words per minute

Speech length

1180 words

Speech time

418 seconds

The workshop aims to elevate children’s voices in AI design without being patronizing to their views

Explanation

The session is specifically designed to ensure children are part of decision-making processes regarding AI development. The approach emphasizes treating young people’s perspectives with respect and incorporating their real ideas about the future of AI rather than dismissing them as less valuable than adult opinions.


Evidence

Workshop called ‘Elevating Children’s Voices in AI Design’ with participation from young people sharing experiences and hopes, including video messages and panel participation


Major discussion point

Youth Participation and Rights


Topics

Human rights | Sociocultural


Agreed with

– Online Participants
– Dr. Mhairi Aitken

Agreed on

Children must be meaningfully included in AI decision-making processes


Children are already using AI and the question is whether children are ready for AI or AI is ready for children

Explanation

This fundamental question addresses the current reality that children are actively engaging with AI technologies across multiple contexts and purposes. The framing suggests examining whether the responsibility lies with preparing children for AI or ensuring AI systems are appropriately designed for children.


Evidence

Research findings showing kids are already using AI across multiple different contexts for multiple different purposes


Major discussion point

AI Design and Child Safety Concerns


Topics

Human rights | Sociocultural


Agreed with

– Dr. Mhairi Aitken
– Stephen Balkam

Agreed on

AI systems are not designed with children in mind and require child-centric development from the start


M

Mariana Rozo‑Paz

Speech speed

161 words per minute

Speech length

323 words

Speech time

119 seconds

AI agents as influencers are directly affecting children’s real-life relationships and experiences

Explanation

The emergence of AI agents functioning as influencers presents new challenges beyond traditional human influencers or children becoming influencers themselves. These AI agents are not just affecting children’s digital lives but are having concrete impacts on their real-world relationships and social interactions.


Evidence

DataSphere Initiative youth project research focusing on influencers, including AI agents as influencers in digital spaces affecting children’s concrete lives and relationships


Major discussion point

AI Companions and Emotional Attachment


Topics

Human rights | Sociocultural


There are concerning trends in children being turned into influencers by their parents with mind-blowing statistics

Explanation

Research reveals troubling patterns where parents are converting their children into influencers, raising ethical concerns about consent, exploitation, and the commercialization of childhood. The scale of this phenomenon appears to be significant based on emerging data.


Evidence

DataSphere Initiative research on children being turned into influencers by parents with ‘mind-blowing stats’


Major discussion point

Youth Participation and Rights


Topics

Human rights | Economic


J

Joon Baek

Speech speed

173 words per minute

Speech length

124 words

Speech time

42 seconds

Privacy protection laws aimed at safeguarding children may inadvertently violate other rights

Explanation

There is concern that legislation designed to protect children’s data and ensure their online safety might create unintended consequences that compromise other fundamental rights. This highlights the complex balance required when creating protective measures for children in the AI context.


Evidence

Experience from Youth for Privacy NGO observing privacy issues in legislation aimed at protecting children’s data and safeguarding children online


Major discussion point

Privacy, Data Protection and Transparency


Topics

Human rights | Legal and regulatory


Disagreed with

– Stephen Balkam

Disagreed on

Balance between safety and privacy in AI regulation


P

Participant

Speech speed

150 words per minute

Speech length

203 words

Speech time

80 seconds

AI creates a power imbalance between children and AI systems that needs to be addressed through design

Explanation

Children are in a vulnerable position when communicating with AI about personal issues, as the AI appears to be the ‘bigger person’ or authority in the conversation. Design approaches should focus on increasing children’s independence and reducing this inherent power imbalance rather than reinforcing it.


Evidence

Recognition that children are in a more vulnerable situation and position when AI is the bigger person in conversations about personal issues


Major discussion point

AI Design and Child Safety Concerns


Topics

Human rights | Sociocultural


C

Co-Moderator

Speech speed

127 words per minute

Speech length

107 words

Speech time

50 seconds

There should be separate AI ethics and legislation specifically targeting children rather than applying general frameworks

Explanation

The question of whether AI ethics for children should be distinct from general AI ethics reflects recognition that children have unique needs, vulnerabilities, and rights that may not be adequately addressed by general AI governance frameworks. This suggests the need for specialized approaches to AI regulation and policy for children.


Evidence

Question from law student studying AI law specifically about separating children’s AI ethics from general AI ethics and state-level legislation for AI systems targeting children


Major discussion point

Regulatory and Policy Approaches


Topics

Human rights | Legal and regulatory


Agreements

Agreement points

AI systems are not designed with children in mind and require child-centric development from the start

Speakers

– Dr. Mhairi Aitken
– Stephen Balkam
– Adam Ingle

Arguments

The burden should be on developers and policymakers to make systems safe rather than expecting children to police their interactions


AI systems are not designed with children in mind, requiring retrofitting for safety like previous web technologies


Children are already using AI and the question is whether children are ready for AI or AI is ready for children


Summary

All speakers agree that current AI systems are developed without considering children’s needs and safety, repeating historical patterns from previous web technologies. They emphasize that responsibility should lie with developers and policymakers rather than children themselves.


Topics

Human rights | Legal and regulatory


Children must be meaningfully included in AI decision-making processes

Speakers

– Online Participants
– Dr. Mhairi Aitken
– Adam Ingle

Arguments

Young people must be part of AI conversations as they are affected now, not just in the future


Children are the group most impacted by AI advances but least represented in decision-making about AI development


The workshop aims to elevate children’s voices in AI design without being patronizing to their views


Summary

There is strong consensus that children should have meaningful participation in AI governance and development decisions, as they are currently affected by these technologies and have valuable perspectives to contribute.


Topics

Human rights | Sociocultural


Data transparency and privacy protection are fundamental concerns for AI systems used by children

Speakers

– Online Participants
– Stephen Balkam
– Dr. Mhairi Aitken

Arguments

Privacy is a basic right, not a luxury, and AI data collection must be protected


Data transparency is top priority for parents and teens regarding AI companies


Transparency about AI system nature and data collection is crucial for child interactions


Summary

All speakers emphasize that transparency about data practices and privacy protection are essential requirements for AI systems that children use, viewing privacy as a fundamental right rather than optional feature.


Topics

Human rights | Legal and regulatory


AI has significant potential to support children with additional learning needs and disabilities

Speakers

– Dr. Mhairi Aitken
– Online Participants

Arguments

Children with additional learning needs show particular interest in using AI for communication and support


More research is needed on AI’s role in supporting children with disabilities while ensuring proper understanding of their needs


Summary

There is agreement that AI shows promise for supporting children with additional learning needs, though this must be developed with proper understanding of their specific requirements and challenges.


Topics

Human rights | Sociocultural


Environmental impacts of AI are significant concerns that influence children’s usage decisions

Speakers

– Online Participants
– Dr. Mhairi Aitken

Arguments

AI training requires massive resources including thousands of liters of water and extensive GPU usage


Children who learn about environmental impacts of AI models often choose not to use them


Summary

Both young people and researchers recognize the substantial environmental costs of AI development and note that awareness of these impacts influences children’s decisions about using AI technologies.


Topics

Development | Human rights


Similar viewpoints

Both speakers advocate for distinct legal and ethical frameworks for children’s AI use, recognizing that children have unique vulnerabilities and cannot provide the same informed consent as adults.

Speakers

– Maria Eira
– Co-Moderator

Arguments

Children should have separate AI legislation because they cannot give the same consent as adults


There should be separate AI ethics and legislation specifically targeting children rather than applying general frameworks


Topics

Human rights | Legal and regulatory


Both experts express concern about children developing unhealthy emotional dependencies on AI systems, particularly AI companions and chatbots used for personal or therapeutic purposes.

Speakers

– Stephen Balkam
– Dr. Mhairi Aitken

Arguments

Young children are forming emotional attachments to chatbots and using AI for therapy-like conversations


AI companions marketed to children raise concerns about dependence and isolation from real community


Topics

Human rights | Sociocultural


Both researchers emphasize the need for deeper understanding of AI’s impacts on children, going beyond technical literacy to include critical analysis of business models and rights implications.

Speakers

– Dr. Mhairi Aitken
– Maria Eira

Arguments

Critical AI literacy focusing on business models and rights impacts is needed beyond technical understanding


Long-term impacts of AI technology on children remain unclear with contradictory research results


Topics

Human rights | Sociocultural


Unexpected consensus

Parents’ superior knowledge of AI compared to children

Speakers

– Stephen Balkam
– Maria Eira

Arguments

Teens thought their parents knew more about generative AI than they did, contrary to previous technology trends


Parents who regularly use generative AI feel more positive about its impact on their children’s development


Explanation

This finding is unexpected because historically children have led technology adoption. The reversal occurred because parents were learning AI for work purposes, creating an unusual dynamic where parents had more AI knowledge than their children for the first time in digital technology evolution.


Topics

Sociocultural | Human rights


Children’s preference for traditional materials over AI tools in creative activities

Speakers

– Dr. Mhairi Aitken

Arguments

Children chose traditional, tactile, hands-on art materials over generative AI tools, explaining that their own ‘art is actually real’ while they could not say the same of AI art ‘because the computer did it, not them’


Explanation

Despite children’s general interest in AI, when given the choice between AI and traditional creative tools, they overwhelmingly chose traditional methods. This unexpected preference reveals important insights about children’s values regarding authenticity and personal agency in creative expression.


Topics

Human rights | Sociocultural


Equity concerns creating barriers to AI adoption in education

Speakers

– Dr. Mhairi Aitken

Arguments

Stark differences exist between AI use in private schools versus state-funded schools, pointing to equity issues


Explanation

It was unexpected that AI is creating new forms of educational inequality: rather than democratizing access to educational tools, it appears to be exacerbating existing disparities between private and state-funded schools.


Topics

Development | Human rights | Sociocultural


Overall assessment

Summary

There is strong consensus among speakers on fundamental principles: AI systems need child-centric design from the start, children must be included in AI governance decisions, privacy and transparency are essential rights, and AI shows promise for supporting children with additional needs while requiring careful attention to environmental impacts and bias issues.


Consensus level

High level of consensus on core principles with implications for urgent need for coordinated action across policy, industry, and research domains. The agreement suggests a clear path forward requiring collaboration between technologists, policymakers, educators, and children themselves to ensure AI development serves children’s best interests and rights.


Differences

Different viewpoints

Balance between safety and privacy in AI regulation

Speakers

– Stephen Balkam
– Joon Baek

Arguments

There’s an ongoing struggle to balance safety and privacy, with more safety potentially requiring less privacy


Privacy protection laws aimed at safeguarding children may inadvertently violate other rights


Summary

Stephen Balkam presents this as an inevitable trade-off that requires compromise, while Joon Baek raises concerns about unintended rights violations from protective measures, suggesting a more cautious approach to safety-focused legislation


Topics

Human rights | Legal and regulatory


Approach to AI literacy and education

Speakers

– Online Participants
– Dr. Mhairi Aitken

Arguments

AI should be taught in schools rather than banned, with focus on critical thinking and fact-checking skills


Critical AI literacy focusing on business models and rights impacts is needed beyond technical understanding


Summary

Young people emphasize practical skills like fact-checking and efficient AI use in schools, while Dr. Aitken advocates for deeper critical literacy that includes understanding business models and rights impacts


Topics

Human rights | Sociocultural


Unexpected differences

Children’s preference for traditional materials over AI tools

Speakers

– Online Participants
– Dr. Mhairi Aitken

Arguments

AI should be taught in schools rather than banned, with focus on critical thinking and fact-checking skills


Children who learn about environmental impacts of AI models often choose not to use them


Explanation

While young people in the video advocated for AI integration in education, research findings showed that children often chose traditional tactile materials over AI tools and avoided AI once they learned about its environmental impacts. This reveals a gap between advocacy for AI education and actual usage preferences


Topics

Human rights | Sociocultural | Development


Overall assessment

Summary

The discussion showed remarkable consensus on core principles – that children need protection, representation, and age-appropriate AI design – but revealed nuanced differences in implementation approaches and priorities


Disagreement level

Low to moderate disagreement level with high consensus on fundamental goals. The main tensions were methodological rather than philosophical, focusing on how to achieve shared objectives rather than disagreeing on the objectives themselves. This suggests a mature field where stakeholders agree on problems but are still developing optimal solutions


Takeaways

Key takeaways

Children are already using AI extensively (22% of 8- to 12-year-olds), but AI systems are not designed with children in mind, requiring urgent action to prioritize child-centric development


There is a significant communication gap between parents and children about AI use, particularly for personal applications, with parents who use AI themselves being more positive about its impact


AI literacy education focusing on critical thinking, fact-checking, and understanding business models behind AI systems is essential and should be integrated into schools rather than banning AI


Significant equity issues exist in AI access and education, with stark differences between private and state-funded schools creating potential digital divides


Children show strong concerns about bias and representation in AI outputs, environmental impacts, and inappropriate content, often choosing not to use AI when these issues are present


AI shows particular promise for supporting children with additional learning needs, but development must be grounded in understanding actual needs rather than pushing technology solutions


The burden of ensuring AI safety should be on developers, policymakers, and regulators rather than expecting children to police their own interactions


Children have a fundamental right to participate in AI decision-making processes that affect their lives, as they are the most impacted group but least represented in development decisions


Resolutions and action items

UNICRI and Disney are launching AI literacy resources (3D animation movie for adolescents and parent guide) at the AI for Good Summit in two weeks


Technology companies should provide transparent explanations of AI decision-making, algorithm recommendations, and system limitations


Industry should fund research and programs to help children develop AI literacy and content discernment skills


AI tools should be designed with children in mind from the start, not as an afterthought, learning from past mistakes with web technologies


Companies should focus on people rather than profits when developing AI tools for children


Separate legislation specifically targeting children’s AI rights and protections should be developed, recognizing children’s unique consent and awareness limitations


Unresolved issues

Long-term impacts of AI technology on children remain unclear with contradictory research results on effects like critical thinking development


How to effectively balance safety and privacy rights in AI systems for children without compromising either


Addressing the environmental impact of AI models and providing transparent information about resource consumption to users


Developing age-appropriate AI companions that support mental health without creating dependency or isolation from real communities


Scaling AI literacy programs globally and implementing them effectively in school systems across different countries


Addressing the power imbalance between children and AI systems in personal conversations and interactions


How to ensure AI systems designed for children with disabilities are grounded in actual needs rather than technology-first approaches


Preventing the monetization and exploitation of children through AI-powered influencer marketing and family vlogging


Suggested compromises

Accepting that perfect balance between safety, privacy, and free expression may never be achieved, requiring constant compromise and adjustment


Designing AI systems well for children will benefit other vulnerable users and wider user groups, creating broader positive impact


Starting with problem identification and user needs assessment before applying AI solutions, rather than technology-first approaches


Combining transparency about AI system nature and data collection with critical AI literacy education to enable informed choices


Developing AI literacy resources that target both children and parents simultaneously to improve communication and understanding


Thought provoking comments

AI is extremely advantageous when used correctly. But when misused, it can have devastating effects on humans… Young people like me must be part of this conversation. We aren’t just the future, we’re here now.

Speaker

Online Participants (Young people from across the UK)


Reason

This opening statement immediately established the central tension of the discussion – AI as both opportunity and threat – while assertively claiming young people’s right to participate in decision-making. The phrase ‘we aren’t just the future, we’re here now’ powerfully challenges the common dismissal of children’s voices as merely preparatory for future relevance.


Impact

This comment set the entire tone for the workshop, establishing children as active stakeholders rather than passive subjects of protection. It influenced all subsequent speakers to frame their research and recommendations around meaningful youth participation rather than paternalistic approaches.


teens thought that their parents knew more about generative AI than they did. With previous trends, particularly in the early days of the web, and then web 2.0, and social media, kids were always way ahead of their parents in terms of the technology. But in this case, a large, sizable share of teens in all three countries recorded that their parents had a better understanding than they did.

Speaker

Stephen Balkam


Reason

This finding fundamentally challenges the conventional wisdom about digital natives and technology adoption patterns. It suggests a significant shift in how AI technologies are being introduced and adopted, with workplace necessity driving adult adoption ahead of youth exploration.


Impact

This observation reframed the entire discussion about digital literacy and family dynamics around AI. It led to deeper exploration of how AI literacy should be approached differently from previous technology rollouts and influenced subsequent speakers to consider intergenerational learning approaches.


art is actually real… children felt that they couldn’t say that about AI art because the computer did it, not them.

Speaker

Dr. Mhairi Aitken (quoting children from her research)


Reason

This insight reveals children’s sophisticated understanding of authenticity, creativity, and personal agency in relation to AI. It challenges assumptions that children will automatically embrace AI tools and shows their nuanced thinking about what constitutes genuine creative expression.


Impact

This comment shifted the discussion from focusing on AI capabilities to considering children’s values and choices. It introduced the important concept that technology adoption isn’t just about functionality but about meaning and identity, influencing how other panelists discussed the importance of providing alternatives and respecting children’s preferences.


parents who use generative AI tools feel more positive about the impact that this technology can have on their children’s development… when parents are familiar with the technology, when they use the technology, they see it differently.

Speaker

Maria Eira


Reason

This finding reveals a crucial insight about how personal experience with technology shapes attitudes toward children’s use of that technology. It suggests that fear and resistance may stem from unfamiliarity rather than inherent dangers, pointing toward education as a key intervention.


Impact

This observation led to discussion about the importance of adult AI literacy as a prerequisite for supporting children’s safe AI use. It influenced the conversation toward considering family-based approaches to AI education rather than child-focused interventions alone.


children are likely to be the group who will be most impacted by advances in AI technologies, but they’re simultaneously the group that are least represented in decision-making about the ways that those technologies are designed, developed, and deployed

Speaker

Dr. Mhairi Aitken


Reason

This statement crystallizes the fundamental injustice at the heart of current AI development – those most affected have the least voice. It frames the entire discussion in terms of rights and representation rather than just safety or education.


Impact

This comment elevated the discussion from technical considerations to fundamental questions of democracy and rights. It influenced subsequent speakers to consider not just how to protect children from AI, but how to include them in shaping AI’s development.


the goal cannot be the profits, it must be the people

Speaker

Maria Eira (quoting from the children’s video)


Reason

This simple but profound statement cuts to the heart of the tension between commercial AI development and human welfare. Coming from children themselves, it carries particular moral weight and clarity about priorities.


Impact

This comment served as a powerful conclusion that tied together many threads of the discussion. It reinforced the moral imperative for child-centered AI development and provided a clear principle for evaluating AI initiatives.


Should AI ethics for children be separated from general AI ethics?

Speaker

Katarina (online participant studying AI law)


Reason

This question forced the panel to articulate whether children’s needs are fundamentally different from adults’ or simply a subset of universal human needs. It challenged the assumption that child-specific approaches are necessary while opening space to consider the broader implications of child-centered design.


Impact

This question prompted important clarification from panelists about why children need specific consideration while also acknowledging that good design for children benefits everyone. It helped crystallize the argument for child-specific approaches while avoiding segregation of children’s interests from broader human rights.


Overall assessment

These key comments fundamentally shaped the discussion by establishing children as active stakeholders rather than passive subjects, challenging conventional assumptions about technology adoption and digital literacy, and elevating the conversation from technical considerations to questions of rights, representation, and values. The opening statement from young people set a tone of empowerment that influenced all subsequent speakers to frame their research in terms of meaningful participation rather than protection. The research findings about reversed technology adoption patterns and children’s sophisticated value judgments about authenticity added nuance and complexity to common assumptions. The discussion evolved from a focus on safety and education to encompass broader questions of democracy, representation, and the fundamental purposes of AI development. The interplay between research findings and direct youth voices created a rich dialogue that moved beyond typical adult-centric approaches to technology policy.


Follow-up questions

How are influencers (including AI agents as influencers) shaping children’s experiences with AI and social media, and how does this affect their real-life relationships?

Speaker

Mariana Rozo-Paz from DataSphere Initiative


Explanation

This addresses a gap in current research about the influence of AI agents and human influencers on children’s digital experiences and their concrete impact on real-world relationships


What are the long-term impacts of generative AI use on children’s development and well-being?

Speaker

Maria Eira


Explanation

Current research shows contradictory results about whether AI improves or decreases critical thinking skills, indicating need for longitudinal studies


How can AI companions be designed responsibly to support children’s mental health without creating dependency or exacerbating loneliness?

Speaker

Dr. Mhairi Aitken


Explanation

There’s growing interest from children in using AI companions for mental health support, but current tools aren’t designed with children’s well-being in mind


How can generative AI be further leveraged for the support and inclusion of people with disabilities?

Speaker

Ryan (17-year-old youth ambassador)


Explanation

Children showed strong interest in AI supporting those with additional learning needs, but development needs to be grounded in understanding actual needs and challenges


How can AI be designed to reduce power imbalances between children and AI systems, particularly in personal conversations?

Speaker

Elisa from OnePile Foundation


Explanation

Children are in vulnerable positions when communicating with AI about personal issues, requiring design approaches that maintain child agency and independence


How can we develop critical AI literacy that goes beyond technical understanding to include business models and rights impacts?

Speaker

Dr. Mhairi Aitken


Explanation

Current AI literacy efforts focus on technical aspects, but children need to understand the broader implications including data collection, business models, and rights impacts to make informed choices


What are the impacts of using AI bots for therapy, particularly regarding emotional attachments and potential risks?

Speaker

Stephen Balkam


Explanation

Anecdotal evidence shows children and adults using AI for therapeutic conversations, but research is needed on the safety and effectiveness compared to human therapy


How can we address equity gaps in AI access and education between private and state-funded schools?

Speaker

Dr. Mhairi Aitken


Explanation

Research revealed stark differences in AI access and understanding between private and state schools, pointing to important equity issues that need addressing


How can we better understand and address parental awareness gaps regarding children’s personal use of generative AI?

Speaker

Maria Eira


Explanation

Research showed parents are aware of academic AI use but lack knowledge about personal uses like AI companions or seeking help for personal problems


What regulatory approaches can protect children’s rights in AI without violating other rights like privacy?

Speaker

Joon Baek from Youth for Privacy


Explanation

There are concerns that legislation aimed at protecting children in AI contexts might inadvertently compromise other rights, requiring careful balance


Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.