WS #294 AI Sandboxes Responsible Innovation in Developing Countries
Session at a glance
Summary
This workshop at the Internet Governance Forum focused on AI sandboxes as tools for regulatory experimentation and innovation governance. Sophie Tomlinson from the DataSphere Initiative moderated a diverse panel of experts from government, business, academia, and international organizations to explore how sandboxes can help assess and govern AI technologies across different sectors.
Mariana Rozo-Pan introduced sandboxes as collaborative spaces where stakeholders experiment with technologies against regulatory frameworks, drawing parallels to childhood play with building blocks. The DataSphere Initiative has mapped over 150 sandboxes globally, demonstrating their expansion from fintech origins to AI applications across developed and developing countries. Meni Anastasiadou from the International Chamber of Commerce emphasized how sandboxes support the four-pillar approach to AI governance, particularly benefiting small and medium enterprises by providing safe testing environments before market deployment.
Alex Moltzau from the European AI Office discussed the EU AI Act’s incorporation of regulatory sandboxes, highlighting ongoing work with member states to develop implementation frameworks and cross-border collaboration mechanisms. Speakers from Africa, including Jimson Olufuye and Maureen, shared insights about the continent’s growing interest in sandboxes, with Nigeria developing frameworks for data protection compliance and AI strategy implementation.
Key challenges identified include resource constraints, the need for clear legal frameworks, transparency in eligibility criteria, and meaningful stakeholder engagement including civil society participation. Natalie Cohen from the OECD emphasized that sandboxes are just one form of regulatory experimentation, requiring careful consideration of policy objectives and exit strategies. The discussion highlighted sandboxes’ potential to build trust between regulators, businesses, and civil society while providing evidence-based approaches to governing emerging AI technologies responsibly across borders and sectors.
Key points
## Major Discussion Points:
– **What are AI sandboxes and why are they needed**: The discussion established that sandboxes are collaborative, safe spaces where different stakeholders (public sector, private sector, civil society) can experiment with AI technologies against existing or developing regulatory frameworks. They originated in fintech but are now expanding globally across sectors like health, transportation, and data governance.
– **Implementation challenges and resource considerations**: Speakers highlighted significant barriers including funding constraints, resource intensity for regulators, need for clear governance structures, eligibility criteria, and exit strategies. The discussion emphasized that sandboxes require substantial overhead for both regulators and participating businesses, particularly affecting SME participation.
– **Global perspectives and cross-border potential**: The conversation covered sandbox initiatives across different regions – from the EU AI Act’s regulatory sandboxes to Africa’s emerging sandbox landscape (25 national sandboxes, mostly in finance) to Asia’s health sector applications. There was significant discussion about the potential for cross-border sandboxes to address interoperability and international collaboration.
– **Stakeholder inclusion and civil society participation**: Multiple speakers emphasized the need to meaningfully include civil society and individuals affected by AI systems throughout the sandbox process, not just businesses and regulators. This was identified as an area needing improvement in current sandbox frameworks.
– **Trust-building and evidence-based regulation**: The discussion positioned sandboxes as tools to address mistrust between stakeholders and build evidence-based regulatory approaches for AI governance, with only 41% of countries trusting governments to appropriately regulate new technologies according to OECD data.
## Overall Purpose:
The workshop aimed to explore how regulatory sandboxes can serve as effective tools for AI governance, bringing together diverse international perspectives to discuss practical implementation strategies, challenges, and opportunities for using sandboxes to responsibly develop and regulate AI technologies across different sectors and regions.
## Overall Tone:
The discussion maintained a consistently collaborative and constructive tone throughout. Speakers were enthusiastic about sandbox potential while being realistic about implementation challenges. The tone was professional yet accessible, with speakers building on each other’s points and acknowledging different regional perspectives. There was a sense of shared learning and knowledge exchange, with participants openly discussing both successes and obstacles in sandbox development. The atmosphere remained positive and forward-looking, focusing on solutions and best practices rather than dwelling on problems.
Speakers
**Speakers from the provided list:**
– **Sophie Tomlinson** – Director of Programs at the DataSphere Initiative
– **Mariana Rozo-Pan** – Research and Project Management Lead at the DataSphere Initiative
– **Meni Anastasiadou** – Digital Policy Manager at the International Chamber of Commerce
– **Alex Moltzau** – Policy Officer at the European AI Office
– **Jimson Olufuye** – Chairman of AFICTA (Africa ICT Alliance), Principal Consultant at Contemporary Consulting
– **Natalie Cohen** – Head of Regulatory Policy for Global Challenges at the OECD
– **Moraes Thiago** – PhD Researcher at VUB in Belgium, also works at the Brazilian Data Protection Authority
– **Jai Ganesh Udayasankaran** – Executive Director at the Asia eHealth Information Network
– **Participant 1** – Africa Sandboxes Forum Lead at the DataSphere Initiative (identified as Maureen/Amoturine based on context)
– **Audience** – Multiple audience members including Giovanna (Brazil Youth Program facilitator) and others
**Additional speakers:**
– **Bertrand de la Chapelle** – Chief Vision Officer at the DataSphere Initiative
Full session report
# AI Sandboxes for Regulatory Experimentation: A Comprehensive Workshop Report
## Introduction and Context
This workshop at the Internet Governance Forum brought together international experts to explore AI sandboxes in regulatory experimentation. Moderated by Sophie Tomlinson, Director of Programmes at the DataSphere Initiative—described as “a think-do-tank working on data governance and sandboxes”—the session featured representatives from government agencies, international organisations, business associations, and academic institutions. The discussion began with interactive Mentimeter polling, engaging participants on their associations with sandboxes and sector priorities.
The conversation maintained a collaborative tone throughout, with speakers demonstrating enthusiasm about sandbox potential whilst remaining realistic about implementation challenges. Participants built upon each other’s contributions, creating knowledge exchange that reflected the global nature of AI governance challenges.
## Understanding AI Sandboxes: Definitions and Evolution
### Conceptual Framework
Mariana Rozo-Pan, Research and Project Management Lead at the DataSphere Initiative, opened with a compelling childhood metaphor: “We often forget how we used to play when we were kids. And as we were children growing up, we were actually quite excited about experimenting and about thinking about building things, building them, and then kind of destroying them and building something new again.”
This framing established sandboxes as collaborative, safe spaces where different stakeholders—public sector, private sector, and civil society—experiment with technologies against existing or developing regulatory frameworks. Rozo-Pan defined sandboxes as environments enabling stakeholders to “craft solutions, experiment with technologies” in structured yet flexible ways.
### Global Expansion
The DataSphere Initiative’s mapping revealed significant global expansion, with Rozo-Pan noting they had identified “over 66 sandboxes that now is around 150” worldwide. This represents an evolution from sandboxes’ origins in financial technology to AI applications across diverse sectors, including health, transportation, and data governance. The expansion spans both developed and developing countries, indicating widespread recognition of sandboxes as valuable regulatory tools.
## Business and Industry Perspectives
Meni Anastasiadou, Digital Policy Manager at the International Chamber of Commerce, provided the business community’s perspective. She positioned sandboxes within a broader approach to AI governance, emphasising their particular value for small and medium enterprises (SMEs) that may lack resources for extensive regulatory compliance testing.
Anastasiadou argued that sandboxes are “particularly beneficial for SMEs,” addressing a critical gap in the innovation ecosystem. She emphasised that AI governance frameworks need to be “harmonised, flexible, and supportive of innovation while reducing compliance complexities,” positioning sandboxes as tools that can achieve this balance.
## European Union Implementation Framework
Alex Moltzau, Policy Officer at the European AI Office, provided detailed insights into the EU’s approach to incorporating regulatory sandboxes within the AI Act framework. The EU’s implementation represents one of the most comprehensive attempts to integrate sandboxes into formal AI regulation.
Moltzau explained that the EU AI Office is developing implementation frameworks in collaboration with member states, with a draft Implementing Act for AI regulatory sandboxes expected for public consultation in autumn. The EU approach emphasises that SME participation should be free according to AI Act provisions, addressing equity concerns raised throughout the discussion.
Moltzau positioned sandboxes within evidence-based policy-making frameworks, noting that “exit reports are crucial for dissemination and getting value from sandbox investments.” He also mentioned cross-border collaboration potential, stating that “cross-border sandboxes can facilitate extensive collaboration on transport, health, and other sectors between regulatory environments.”
## African Perspectives and Emerging Markets
Jimson Olufuye, Chairman of AFICTA (Africa ICT Alliance), provided insights into Africa’s engagement with sandbox approaches. He noted the continent’s growing interest in AI applications as countries develop their digital strategies, emphasising that “regional cooperation is essential for products with countrywide and regional benefits.” Olufuye also referenced the Global Digital Compact (GDC) in discussing international cooperation frameworks.
Maureen, identified as the Africa Sandboxes Forum Lead, provided ground-level insights into practical implementation challenges. She highlighted two critical issues: funding constraints and legal authority questions. Regarding funding, she noted that “funding challenges exist, with potential solutions including cost-sharing models between affected sectors.”
More fundamentally, she observed that “legal backing for sandboxing authority is often unclear and needs to be established,” representing a significant barrier as many regulators want to establish sandboxes but are uncertain about their legal authority to do so.
## OECD Analysis and International Frameworks
Natalie Cohen, Head of Regulatory Policy for Global Challenges at the OECD, positioned sandboxes within broader regulatory experimentation frameworks. She provided crucial context with a striking statistic: “Only 41% of countries trust governments to appropriately regulate new technologies, showing need for evidence-based collaboration.”
Cohen emphasised that sandboxes “require significant governance resources, clear eligibility criteria, testing frameworks, and exit strategies,” highlighting substantial overhead involved. She noted the importance of avoiding market distortions whilst supporting innovation: “Governments need to consider funding support for SMEs and avoid creating market distortions while providing participation incentives.”
## Academic and Research Perspectives
Moraes Thiago, a PhD Researcher at VUB in Belgium who also works at the Brazilian Data Protection Authority, introduced a critical dimension: meaningful civil society participation. He argued that “civil society and individuals affected by AI solutions need meaningful roles before, during, and after sandbox implementation.”
His perspective emphasised that sandboxes should consider “individuals that are having their personal data processed or that will be affected by these AI solutions, regardless if the personal data has been processed or not.” This broader conception challenges sandboxes to move beyond business-regulator dialogues to include those most impacted by AI systems.
Regarding documentation, Thiago noted that “exit report authorship varies between companies and regulators, with flexibility in approach depending on context.”
## Health Sector Applications
Jai Ganesh Udayasankaran, Executive Director at the Asia eHealth Information Network (representing 84 countries with 2,600+ members), provided insights into health sector applications. He emphasised that health sector sandboxes can address “universal health coverage, interoperability standards, and cross-border data sharing needs.”
Significantly, Udayasankaran challenged traditional regulatory paradigms by advocating for sandboxes as “collaborative spaces with hand-holding support rather than just gatekeeping,” suggesting a fundamental shift from adversarial compliance checking to collaborative capacity building.
## Trust Building and Stakeholder Relations
Bertrand de la Chapelle, Chief Vision Officer at the DataSphere Initiative, provided a crucial intervention addressing underlying trust deficits. He observed: “there are key words that we don’t dare to use, but that are very important in this discussion. One is mistrust… And we have to recognize that in the last 20 years, a huge amount of mistrust has grown between public authorities, private actors, and civil society.”
He positioned sandboxes as “one of the tools that brings the capacity of dialogue, particularly when the discussions are taking place very early on,” framing them as trust-building mechanisms rather than merely technical regulatory tools.
## Key Implementation Challenges
### Resource and Legal Framework Issues
Throughout the discussion, resource constraints emerged as a persistent challenge across different contexts. The African perspective highlighted particular challenges in developing economies, while European experiences demonstrated that even well-resourced regulatory systems face significant overhead requirements.
Legal uncertainty about regulatory authority for experimental approaches creates barriers to sandbox development across multiple jurisdictions. Many regulators expressed interest in establishing sandboxes but lacked clarity about their legal authority to engage in experimental regulatory approaches.
### Stakeholder Engagement
The discussion revealed significant challenges in ensuring meaningful stakeholder participation, particularly for SMEs and civil society organisations. While there was strong consensus on the importance of inclusive participation, speakers identified multiple barriers including resource constraints and complex application processes.
## Audience Engagement and Questions
The session included significant audience interaction through Mentimeter polling and Q&A. Giovanna from the Brazil Youth Program asked detailed questions about exit reports and documentation processes, highlighting young professionals’ engagement with sandbox development.
A representative from Vietnam inquired about policy packages and legislative features, demonstrating global interest in practical implementation guidance.
## Areas of Consensus and Disagreement
### Strong Consensus
The strongest consensus emerged around sandboxes’ collaborative nature, requiring meaningful participation from public sector, private sector, and civil society actors. All speakers agreed on the need for special SME support, including free participation and funding assistance.
There was universal acknowledgment that sandboxes are resource-intensive endeavours requiring careful planning, adequate funding, and proper documentation.
### Key Tensions
Speakers differed on implementation approaches, with some advocating supportive, collaborative approaches while others emphasised rigorous evaluation and market neutrality. Different regional perspectives proposed varying solutions to resource constraints, from cost-sharing models to government funding responsibility.
## Future Directions
Several concrete action items emerged, including the EU AI Office’s draft Implementing Act for public consultation and continued collaboration through the DataSphere Initiative’s coaching and master classes. The OECD committed to developing toolkits for sandbox implementation.
## Conclusion
The workshop revealed remarkable consensus on AI sandboxes’ value as tools for regulatory experimentation and innovation governance. Despite diverse geographical and institutional perspectives, speakers demonstrated strong alignment on fundamental principles including collaborative approaches, SME support requirements, and the value of cross-border cooperation.
The discussion successfully addressed broader challenges of trust-building and institutional legitimacy in technology governance. The recognition that sandboxes serve trust-building functions beyond their immediate regulatory purposes provides important context for understanding their growing global adoption.
Key challenges remain around resource allocation, legal framework development, and meaningful stakeholder engagement. However, the strong consensus on fundamental principles provides a solid foundation for addressing implementation challenges through continued collaboration and knowledge sharing.
The workshop’s collaborative tone and constructive engagement across different perspectives suggests that the sandbox community has developed effective mechanisms for knowledge sharing and mutual learning, potentially serving as a model for broader technology governance challenges requiring international coordination.
Session transcript
Sophie Tomlinson: Hello everybody and welcome to this workshop on AI sandboxes. Thank you so much for choosing to spend what must be your morning with us. My name is Sophie Tomlinson, and I’m the Director of Programs at the DataSphere Initiative. For people who aren’t familiar with our work, we are a think-do-tank working on data governance and sandboxes, working with businesses, governments, and civil society on how we can responsibly unlock the value of data for all. We’re here today to talk about how sandboxes and different types of experimental regulatory approaches can help us in using AI, in assessing whether we want to or need to use AI, and also approaching these governance questions that we face as we see AI penetrating different types of sectors. So what I’d like to just share with you before we get started is a QR code to a Mentimeter that we will be running. Please check out the QR code and go to the first question we have for you because we’d love to get your insights. We have a very diverse and exciting panel today with many different speakers, and as you can see from this list we have a couple of people online, but also in person with us here in Oslo at the IGF. I’m going to introduce them as we go through the session, but as you can see all their names here. So first of all, what I’d like to start with is what is a sandbox, and what do we know about this as a concept and a tool for tech development and policy innovation? I’d like to hand over to Mariana Rozo-Pan, who is the Research and Project Management Lead at the DataSphere Initiative, to give us a first look at what sandboxes are and their potential for AI. So, Mariana, over to you.
Mariana Rozo-Pan: Thank you, Sophie. And hi, everyone. Good morning, good afternoon, good evening. We are very excited about hosting this workshop. I think it’s like the third workshop that we host at the IGF focused on sandboxes. And for those that are here in person, I’d actually like to see a little show of hands of who here played in a sandbox as a kid, or maybe with Legos, with building blocks, building things. I see laughs and hands going up, even from the technical team, which is exciting. Well, I did, too. And I was actually quite obsessed with playing with Legos and building things. And one of the things that we realized when it comes to governing data responsibly and emerging technologies responsibly is that we often forget how we used to play when we were kids. And as we were children growing up, we were actually quite excited about experimenting and about thinking about building things, building them, and then kind of destroying them and building something new again. And that flexible, agile mindset that maybe we had when we were children is what we’re often lacking when it comes to building agile regulations and shaping how we’re governing technologies, building technologies, and addressing the complex challenges that we’re facing nowadays. So, sandboxes, I would actually like for us to go to the Menti. Could we look into the answers of that first question that we had? Thank you. So, I’m seeing that people are answering collaboration, solution. That’s what comes to mind when you hear sandboxes, which is an exciting response, I must say. And that’s actually what sandboxes are all about. It’s about flexibility. It’s about collaborating. So, sandboxes are collaborative spaces, safe spaces for collaboration in which, by nature, different stakeholders come together to craft solutions, experiment with technologies.
There are different types of sandboxes, as we will be more than happy to share more of later, but regulatory sandboxes are those in which the different stakeholders, the public, the private sector, and hopefully also civil society, experiment and test technologies against an existing or an in-development regulatory framework. And operational sandboxes are those in which different stakeholders test with the data or with existing technologies. Sandboxes can also be hybrid, and we can go more into that. They were originally created within the finance sector to test financial technologies and they are now being used across sectors for AI, for health, for transportation and in many other use cases. And they are a promising methodology in the end that has already been implemented, again, across sectors and is being pretty effective in driving innovation and ensuring that we are doing things as we were when we were growing up. So I’m seeing very interesting responses, testing, collaboration, solution and if we go back to the slides that we had, I also wanted to share at the DataSphere Initiative we have been doing intense and extensive work around sandboxes and we’re sharing here our sandboxes for AI report which is our latest report focused on the potential of sandboxes for AI. We have a mapping that has identified over 66 sandboxes that now is around 150 focused on different topics and particularly on AI innovation and here you can also see a map of the distribution of sandboxes across the world which is a very exciting and interesting methodology that’s being implemented not only in developed countries but also in developing economies and in countries throughout the global south, in Latin America, in different countries in Asia and in Africa. So we’re seeing that this is a tool that’s proven interesting, successful and powerful when it comes to testing bold ideas in collaborative and safe spaces.
And at the DataSphere Initiative we also have a methodology on how to do a sandbox that includes not only thinking about how to do them responsibly but about responsible design, effective communication and engagement and making sure that it is not only a space where specific startups or private companies have access to resources and testing and iteration but it’s also a space that in the end creates public value and translates into better technologies for our society in general. So that’s a bit of a snapshot of what we do and back to you Sophie for our interesting conversation today.
Sophie Tomlinson: Thank you Mariana. So why sandboxes for AI in particular? This is what we want to talk about now in this first session. And I’d like to welcome Meni who is the Digital Policy Manager at the International Chamber of Commerce to share her thoughts on this. So Meni, you’re working at ICC with businesses from all around the world across all sectors. What do you think are the types of AI governance approaches that are needed and how could sandboxes play a role in the context of AI? Thank you. Sorry, just taking this off.
Meni Anastasiadou: Thank you so much Sophie and many thanks for the wonderful invitation to participate at this session today. I am Meni Anastasiadou, I’m the Digital Policy Manager at the International Chamber of Commerce. For the colleagues that might not know us, we are the institutional representative of more than 45 million businesses across 170 countries. So we really have an inclusive membership that goes beyond sectors and geographies. So AI is really an incredible tool. We see it transforming industries all over the world and really providing productivity gains and improving efficiency and lowering costs for various different sectors and again shapes and sizes of businesses. So we also see this as I’ve mentioned being especially true for SMEs which are the backbone of the global economy and how we govern AI particularly impacts SMEs. So I really like the presentation that showed earlier how we should consider AI governance approaches that are fitting to multiple different sizes let’s say of stakeholders including of course businesses and this really speaks to the fact that we should be mindful of the approaches that we take when we’re talking about AI governance to ensure that we are inclusive and supportive to innovation. So to that point particularly, ICC has put forward a proposal for an AI governance framework that we call the four pillar approach. We publicize it through the ICC narrative on artificial intelligence in September of last year and the thought process that we present around this is that in order to ensure that we sustain the use of AI in a safe way that benefits different sectors, economies around the world, we really need to make sure that AI governance frameworks are harmonized with existing global agreements so that they don’t really create a patchwork of regulations which as we know makes it particularly challenging And this can help reduce barriers and compliance complexities. 
We should also make sure that AI governance frameworks are flexible and do not hinder investment. And of course that they create at the end of the day favorable commercial conditions that can support entrepreneurship. So back to my point on ICC’s four-pillar narrative or four-pillar approach to AI governance. So what we want to show is that by adhering to the ideas that I mentioned earlier, all businesses can really harness the power of AI to drive innovation and ensure compliance and build trust. So if we align AI governance with that in mind, we can really ensure that everyone is equipped to harness AI and accelerate their growth. Now regulatory sandboxes are really a great tool that actually respond to this framework of governance and they can really enable the safe and real-world testing of AI systems, particularly for SMEs. Mariana, you spoke earlier about how sandboxes were first used in fintech, but since then we have seen how their use has spread to cover other areas and an inclusive set of geographies. So I really like the mapping that you’ve shown us earlier, which really speaks to how important a tool AI sandboxes are to the trustworthy and safe AI governance model. So just to give you an example speaking to the use of sandboxes and how those are effective for SMEs: just as we know how aerospace engineers are always testing on the ground how an airplane works to make sure that it’s safe before it actually flies, it’s the same idea, the same principle, making sure that we bring together all stakeholders to have the time to test if AI works, what are the right safeguards to apply, what are the right principles and guidelines to make sure are in place, so that we make full use of all the benefits that AI has to offer. And this can eventually help all deployers, developers, and users of AI to really take off, and can eventually help SMEs take off, just as airplanes do.
So perhaps maybe I can stop here.
Sophie Tomlinson: Thank you for that analogy, I think it’s really helpful. And I’d like to move on to Alex Moltzau, who’s the Policy Officer at the European AI Office. Alex, your role at the AI Office has a component which is very much focused on sandboxes. Could you share a bit of background on the role of sandboxes in the EU AI Act and why you are thinking of how we can actually use these tools?
Alex Moltzau: Yes, of course. I would be happy to, Sophie, and also thank you so much for those considerations, Meni. It’s really important, as you say, that we create favourable commercial conditions. I think it’s this balancing act of responsibility, but also innovation and how we get that right. Because as citizens, we want great products, but also safe products and services. So I’m just going to spend three minutes to talk about three things. The first is a bit of my story, and the second is this balancing act, and the third, what are we doing in the European Commission and this implementing act that we are working on. So I have a background as a social data scientist and also with artificial intelligence in public services. So I worked five years with AI policy nationally in Norway with the research community, with machine learning, artificial intelligence and robotics. However, I was also involved a lot of the time in this sandbox we had in Norway for privacy, but that had exclusively AI cases, and a lot of exit reports that you can find on the internet if you search for the privacy sandbox in Norway. So being here in Norway and being back, I now work in Brussels, and I’ve been there for one year with my family, working in the European AI office, and it’s been quite a journey to start a new place. But I have to say, it’s a really wonderful place to be if you’re interested in AI policy and law, and it’s brought me to think about the whole European region, right? And how do we get this balancing act right? Because I think as a region, we have an approach, we have certain values that we aspire to, and for us, I think we want to be treated in the best way possible, as citizens, as co-workers, as part of society. So I think it’s the case that if we want to have responsible innovation, we need an evidence basis to inform that policy, right?
So if we don’t learn this regulatory learning, kind of like there are regulators that are building their competence on AI as we speak, and try to see what is the right way to ensure that we get this innovation that we want, but also in a way that fulfills citizens’ needs, and it’s not just based on a buzzword, or based on a promise that is unfulfilled. So to not waste money and time, we have to make sure that products actually work as intended, and the sandboxes are, I think, a really good mechanism, a good policy mechanism to do this. So what are we doing in the European Commission right now? We are really working together with member states. We have an AI regulatory sandbox subgroup under the AI board, so we work with member states on a very regular basis. We are writing an Implementing Act for AI regulatory sandboxes, and we are supporting the rollout of the sandboxes across Europe with the Coordination and Support Action EU-USAIR. So I think in that sense, there’s a wide range of things that we are doing, but right now it’s just sort of like, what frameworks are we looking at? And I have to say, in the autumn as well, we will be putting out this, because that’s part of the democratic process, for you to comment as well. So I just encourage everyone who is listening to keep track of when we are releasing this draft Implementing Act, so that you also can tell us about your opinion, because we are not the arbitrators of knowledge in a sense that we just want to understand how to do this in the best way possible, right? So it’s not necessarily true that Europe has all the best solutions. I think we have to look globally at how we can do this together, which is also why I’m here today.
Sophie Tomlinson: Thank you. Thanks so much, Alex. And as Mariana showed on her map of AI and data sandboxes around the world, sandboxes for digital policy challenges and new technologies aren’t just being looked at in Europe. They are a pretty global tool, being explored in Asia, notably Singapore; in Latin America, in Brazil and Chile; and in Africa as well. The DataSphere Initiative has a whole component on Africa and sandboxes, the Africa Sandboxes Forum. As part of this work, we hosted a co-creation lab in Abuja, Nigeria, as part of the African Data Protection Conference that took place in May. And Nigeria itself is looking at developing a sandbox in the context of its data protection law, as a way to help companies comply with this new law. I’d like to bring in Jimson, who is the chairman of AFICTA, one of the biggest private sector organizations in Africa, bringing together companies from all across the continent. Jimson, you were there in Abuja, and you have been working on regulatory innovation and technology for many, many years. Why do you think a sandbox is an interesting way to develop policy and innovation, and could you share a bit about how Nigeria is thinking about this?
Jimson Olufuye: Thank you very much, Sophie, and good morning, everybody. My name, again, is Jimson Olufuye. I’m the chair of the Advisory Council of the Africa ICT Alliance, AFICTA. AFICTA was founded in 2012 with six countries in Africa, and today we cover about 43 countries. Our members are ICT associations, companies, and individual professionals across Africa. That is my volunteer work; for my day work, I run Contemporary Consulting. I’m the principal consultant there, and we’re into data centre management, cyber security, integration, software, and research. So it’s really a great pleasure to serve as AFICTA chair; I was the founder at that time, and even right now I’m still very much involved. And I’m very happy to be associated with the DataSphere and with this topic indeed. Yes, Sophie, we had a great event in Abuja last month. It was really spectacular. We had Maureen, RISPA, and of course you also joined virtually. It was a great capacity development event, and I want to congratulate you for that. And I can see Bertrand right there in the audience. I appreciate all your work, really. You’ve been on it for quite a while, a veteran indeed. So thank you for what you do. The concept of sandboxes is very important, very relevant, and very appropriate, especially in the AI sphere, because we all know that AI is the main thing right now, and many people are concerned about the ramifications of AI, maybe for harm. But we believe there’s a lot of good in it if it is properly regulated going forward. We know that the whole essence of our gathering here at the IGF, as part of WSIS, is so that we can have a people-centred, inclusive information society. The information society is still evolving, and AI is going to play a very important role in it, which is why I was happy to be part of that workshop. We had regulators there.
We had the Nigerian Data Protection Commission there in full, and also the NCC, the Nigerian Communications Commission, as well as AI companies, civil society people, academics, and quite a number of others. It was really very rich. We looked at the three types indeed, operational, regulatory, and hybrid, and we had case studies, which was quite interesting. So the meeting aligned with the expectations of the participants and the broad stakeholders in AFICTA, because Nigeria has just developed its AI strategy, bringing all stakeholders together to work on it. Now we’re moving to an AI law, and we need regulation, and it must start with data governance, basically. That’s why the Nigerian Data Protection Commission took this very seriously, and they have indicated that they are indeed going to adopt it, because it will help with proper regulation. Even some of us who develop applications learned a lot: we can actually use it to our benefit in terms of market reach, the kind of products we need to design, and what customers want. And we knew, of course, that the Central Bank of Nigeria has actually adopted some form of sandbox to regulate the financial sector. So it’s a rich concept, and I think we need to enrich it more and keep the conversation going. Because right now, fewer than 10 African countries have an AI strategy. Fewer than 10. And from AI strategy we need to move to AI regulation, and regulation is key to directing products, because we don’t want products that are for harm. We want products that are for good, that will benefit the people and bridge the digital divide, which is the main idea of WSIS, of the GDC, and of course of the Sustainable Development Goals. So this fully aligns with them.
We will continue to support the advocacy and the engagement so that regulators can do the right thing and also our members too can know what is expected of them concerning their products. Thank you very much for this opportunity.
Sophie Tomlinson: Thank you so much, Jimson. Thanks also for putting what we’re talking about here today in the context of the wider world of Internet governance and the WSIS process. Now I’d like to go back to the Menti, if we can have a look at the second Menti question for all of our participants. For this question, we’d like you to think about the types of sectors and areas where AI is being applied; the data governance point that Jimson mentioned is in there as an option too. Which issue do you think would benefit most from an AI sandbox? Which sector do you think it could be most helpful for? While those results come in, I’d like to move on to the next part of the discussion, which looks at when and how you actually do one of these sandboxes. We’ve heard in this first part a lot of excitement and interest about the potential of these tools. But if you’re a government, you might be starting to think: do I have the resources to design and set up a sandbox? Or if you’re a company: what incentives do I have to participate in a regulatory sandbox, or to set one up myself, perhaps an operational sandbox? Where do you start? I’d like to bring in Natalie Cohen, who is Head of Regulatory Policy at the OECD, to start us off. Natalie, based on your research and your work with a diverse group of governments, what is the role of sandboxes within the regulatory process, and what sort of challenges might governments face?
Natalie Cohen: Thank you very much, Sophie, and thank you for the opportunity to be here today. Just to clarify, I’m Head of Regulatory Policy for Global Challenges specifically, and one of the things I’m looking at is the question of how you regulate for new technologies in a way that is innovation-friendly. The OECD’s answer to that, I would say, is the 2021 Recommendation on Agile Regulatory Governance to Harness Innovation. That recommendation has a big focus on regulatory experimentation, and sandboxes are just one aspect of regulatory experimentation. So I think a first consideration for a government is to look at the specific policy objectives it wants to achieve and then what is the best way to achieve them, because regulatory experimentation can also mean policy prototyping, innovation testbeds, or simply using piloting powers to test different processes. We think regulatory experimentation is really important: it helps policymakers, regulators and industry come together in a collaborative way, as the Mentimeter pulled out, to shape and improve regulatory environments in a way that manages the tensions that can arise between regulation and innovation. And we think sandboxes are particularly well suited to regulatory experimentation where companies are at the stage of early commercialization, or on the point of bringing something to market, and want to influence the regulatory framework and remove barriers to actually accessing the market, whereas some other forms of experimentation, like innovation testbeds, might be more about proof of concept.
And as has been mentioned, sandboxes are not new; they have been around for a while and have been used with success, particularly in the fintech sector. At the OECD we have two aspects to our work: we provide tools and guidance to help governments develop and build sandboxes, and we also provide technical assistance and support to countries to set up a sandbox, and sometimes to fix a broken one. So one thing I would like to say is that they’re not always the perfect answer. They can be quite resource-intensive to manage, they require governance resources, and certain elements need to be in place to ensure success. For example, governments need to think about the eligibility criteria for the kinds of businesses and innovations they want to test, and make sure those criteria are transparent. They need to be clear about the testing framework and the evaluation process, so that they actually have good evidence that can then go on to influence regulatory policy. And they also need a kind of exit ramp: at the end of the sandbox, when do you close it down, and what is the route for companies to then bring products and services to market on the back of it? All of these things can require a lot of overhead, both for the regulators, who need to be funded and resourced and have the capability to manage the process, and for the participating businesses. So sometimes governments also need to think about providing funding support to businesses, particularly if they want SMEs to participate. Some sandboxes have been successful in testing products and services and bringing them to market, but primarily with larger corporates. What SMEs need support with could be accessing data, or legal and compliance resources as well.
So that’s another thing to think about if you want to create a diverse and sustainable approach to sandboxes. I’ve mentioned a couple of the key issues countries need to think about there. There are also various tensions to manage within sandboxes around the impact on competition and innovation. Regulators and policymakers will be keen not to create market distortions, or to overly favour the participants in sandboxes, while at the same time there need to be incentives for businesses to participate. They need some kind of benefit, whether that’s accelerating their route to market or providing them with enhanced support around some of the resourcing and funding considerations I have mentioned. The OECD is in the process of publishing a toolkit on how to develop and design sandboxes, which will come out in the coming weeks. And as I mentioned, we provide technical support to both member and non-member countries. We’ve done work with Croatia that has led to the development of this toolkit, and we’re about to start a project with Portugal on one of their sandboxes too. So another point is that countries might also need advice and support on how to deploy these things, and that’s where they can reach out.
Sophie Tomlinson: Thank you very much, Natalie. Very helpful, and you made lots of points I want us to come back to. Can we just get the results of the Menti, to see the different sectors people thought could particularly benefit from AI sandboxes? Okay, yeah. I guess finance is maybe not surprising, given how sandboxes originated as a concept, but we can see health as well being a big one, which is good because we’re going to have some discussion on health a bit later in this session. Now I’d like to bring into this conversation Thiago Moraes, who is a researcher at VUB in Belgium. Thiago, you’re researching sandboxes around the world, and you also have experience yourself participating in and designing one. Picking up on what Natalie was saying about the challenges governments can face in setting up a sandbox, or in deciding whether this is the right type of regulatory experimentation tool at all: what could you say about how governments can best manage resources to set up a sandbox, ensure transparency, and bring in different types of stakeholders like civil society? Could you share some of your thoughts on this, please?
Moraes Thiago: Yeah. Hello, everyone, and thanks, Sophie, for the invitation. It has been very nice to be engaging with the DataSphere and other colleagues I see here across several forums, and the IGF is definitely relevant for such a topic. Just before starting, a very clear disclaimer: I’m speaking today as a PhD researcher at VUB, as you mentioned. Some of you might also know me as a practitioner from the Brazilian Data Protection Authority, but today I’m not speaking on behalf of them. As part of my role there, though, I’ve been working with several colleagues to launch a pilot sandbox, and hopefully there will be news on that soon. It will be a nice way to see how a young authority deals with the challenge of establishing something that can be very resource-intensive, but that is at the same time manageable if some care is taken. Maybe my comment will complement a bit what Natalie said, but also show the other side of the coin. It’s true that the sandbox is not the only experimentation tool, and any regulator that wants to establish one has to consider if, how, why, and when: all the questions we’re discussing here. But one thing we have also learned, based on the experience of different jurisdictions, is that it’s quite common, when you’re still testing the waters, to sandbox the sandbox. Basically, you create a pilot. And in these pilots, you often work with the resources you have to decide the scope and how broad your sandbox will be. This means, for example, deciding whether you rely more on your internal staff and the expertise they already have, or whether you will have some kind of partnership, or contract specific experts or consultants. All of this will depend on your conditions, of course.
But there are several institutions that are supporting such initiatives. At the international level, we have development banks; in Latin America, for example, we have CAF, and there’s also CEPAL, and other institutions around the globe are also trying to engage and support. So this is one way of dealing with the challenge of limited resources. In the end, the word cooperation is really important here, so it makes a lot of sense to be at the IGF discussing this. One other thing that I believe is very important is to consider who your co-partners in this endeavour will be, and for how long, because some sandboxes can be quite short: there are even cases of three-month sandboxes, while others go very long, like five years. But in general, and there is a lot of global reporting and academic research that shows this, on average we are aiming for something like six months to one or two years; it really depends on the goal of what you’re testing. And you can also have flexibility in how many projects, how many use cases, you deal with at the same time. All of this is part of the design of the sandbox and very important to consider. My last comment for now touches on what Sophie just mentioned: when we talk about the several stakeholders being engaged, either participants or partners, we often forget the role that civil society has here, especially as we move into this arena of sandboxing in AI, and sandboxing AI in several circumstances. We’re talking about individuals who are having their personal data processed, or who will be affected by these AI solutions regardless of whether personal data is processed.
And because of that, I think it’s very important to also hear the voices of these individuals, and maybe this is something we need to improve in our frameworks. What should their role be throughout the whole sandbox experience? Civil society and individuals might have an important role before, during and after the sandbox is done. This is actually what I’m researching right now. For now, I only bring this up as a provocation, but I hope in the future, as we continue engaging in these forums, I’ll be able to share some insights on what I find about the potential role of civil society here. I would be glad to hear other colleagues’ comments on that. Thank you.
Sophie Tomlinson: Thank you so much for that, Thiago. You covered a lot, which I think we’re going to have time to come back to. I’d also like to bring in now Maureen Amoturine, who is our Africa Sandboxes Forum Lead at the DataSphere Initiative. Maureen has been doing a lot of research in Africa on how sandboxes are being used, interviewing private sector participants in sandboxes as well as the governments setting them up. Maureen, picking up on what Natalie was saying about the barriers, and the thinking governments need to do on how to actually go about setting up a sandbox, and thinking also of the companies joining: what have been some of the lessons from your research in Africa? And could you also mention a bit about the types of training and support the DataSphere Initiative provides to governments planning to set up sandboxes?
Maureen Amoturine: Sure, happy to, Sophie. Thank you so much, and I’m really glad to be here and to see all of you who are participating. Let me start by sharing a few numbers. We’ve looked into sandboxes in Africa, and overall, at least as of the last time we updated the mapping, sometime earlier this year, we have about 25 national sandboxes. Of these, 24 are in the finance sector, which means that government authorities outside finance are only now starting to get into sandboxing. So it’s a new space, one in which they have to identify quite a number of the core elements of sandboxes. The beauty is that, from the conversations happening on the continent, regulators have really embraced the idea of experimentation when it comes to regulating emerging technologies like AI, and so they’re really embracing the idea of sandboxes. But what we are learning is that there are still questions when it comes to those core elements: the how, who, when, and all the details that go into sandboxing. These are things they are grappling with, because sandboxes have largely been used in the fintech sector, and this is a new space they are getting into. So part of what we have been doing is, of course, learning from what is available online. Who is sandboxing? Do they have lessons to share with those who are entering the space? That we have documented in the report Mariana shared earlier, the Africa Sandboxes Outlook report. But we are also going further to engage with stakeholders, so far largely regulators, though we are starting to engage with the private sector and other types of stakeholders, to start thinking about the core elements of sandboxing.
I mean things like the scope of a sandbox, who the people you’re going to work with are, the actors and the stakeholders, and the legal models under which authorities can sandbox, because that is also not yet clear. We are also now looking into resources, which are a huge part of sandboxing. We have learned that a number of regulators are indeed grappling with how a sandbox gets funded: where does the funding of a sandbox come from? Those are the areas in which we are trying to engage with people, and the idea of raising funds for sandboxing has had different approaches in different places. As we carry out the activities of the Africa Sandboxes Forum, we are also learning a lot from sandboxes operated outside of Africa, to see what has worked. You will notice that some sandboxes run by public institutions or authorities have their funding coming from the core operations of the authority, say a data protection authority. So there is that. But what we are exploring in Africa, because it’s not yet the case that there is co-funding for experimentation in some of these authorities, is, through the co-creation activities we are doing with different stakeholders, to get people to put themselves in the shoes of someone setting up a sandbox and think about who this sandbox would affect in terms of other regulators or other sectors. If it’s a data protection authority setting up a sandbox, they think about which other sectors the sandbox is going to affect, and whether they can bring those regulators in and think about cost-sharing models, where there is a shared benefit but also shared costs for the different sectors involved in such a sandbox.
And then the other thing we are trying to brainstorm around with stakeholders is the legal models under which they can sandbox, because sometimes it’s not clear, and we learned that it’s actually one thing regulators grapple with. Sometimes it’s not even clear whether authorities are allowed to sandbox at all. So how do they find the legal backing to carry out such an experimentation, and if it’s not there, how can they go about it? These are questions that most regulators and stakeholders have not yet started thinking through, even while they know they want to sandbox. Working out how to actually approach it has been a challenge, and we have seen that in a number of the people we’ve been co-creating with. Drawing from, say, the Kigali co-creation lab that we did, we learned that stakeholders really would love to use sandboxes to understand whether some of the hype around AI, for example, is true for Africa, and to understand the real value of what some of these emerging technologies are bringing, so that they are able to take them to the next level. So part of that is really what’s been taking our time in Africa: engaging stakeholders and understanding where they are at and how ready they are. I also wanted to mention that part of what we’re doing is, of course, group co-creation activities, but we also offer services such as one-on-one coaching journeys for someone who is ready to sandbox and wants to navigate all the core elements of sandboxing, which are not necessarily straightforward. We are also conducting masterclasses with groups of stakeholders that are ready to learn how to technically run a sandbox.
That is also part of the activities we are looking into, because the need for sandboxes is already there and has already been recognized by regulators. What’s really missing is the push to the next level: working with them to create these sandboxes and navigate the challenges around resources, which we know are key almost everywhere.
Sophie Tomlinson: Thank you, Maureen. Thanks also for sketching out the different activities and sandbox support we can provide at the DataSphere Initiative. First of all, if people in the room want to make a comment or ask a question, there are two microphones on either side of the room; please feel free to go over. While people have a think about that, I’d also like us to bring up the next Menti question, which picks up on some of this discussion: we want to collect the kinds of challenges and barriers people may face when thinking about whether and how to set up a sandbox. Now I’d also like to bring in a perspective from the health sector, and I think this is timely because many of you highlighted it as a key sector where testing AI technologies and policy through sandboxes could be useful. Jai, I can see you’re online; Jai is connecting from the Asia eHealth Information Network, where he is the Executive Director. Could you talk to us a bit about the potential for sandboxes in the health sector, and could you particularly touch on how they could be useful in a cross-border context? I know you’re doing a lot of this in your convergence work at the Asia eHealth Information Network. Over to you, Jai, please. Jai, can you hear us?
Jai Ganesh Udayasankaran: Yes, Sophie, thank you.
Sophie Tomlinson: Great, thank you.
Jai Ganesh Udayasankaran: First, I would like to quickly introduce our network, the Asia eHealth Information Network, a regional digital health network with a core focus on Southeast Asia and the Western Pacific in terms of the World Health Organization regions, although we have over 2,600 members across 84 countries. Our primary focus is capacity building and support for national digital health programmes: we work with governments in the countries, supporting them on the core health information building blocks and on the gaps that currently exist in governance, architecture, people and programme management, and standards and interoperability. Many of the speakers before me have mentioned various challenges. One of the challenges is who should be involved in the sandboxes, and who is actually qualified to take the decisions. Mostly the regulators, of course; from the government point of view, they are usually the ones who start, or decide on, the sandboxing criteria. But we have also had recent discussions, as Thiago mentioned, about how civil society could participate meaningfully. Coming back to your primary question: we work extensively with countries. We have official representation from 15 countries, with two more likely to join, in what is known as the Working Council, a body of country representatives that advises the AeHIN board of directors as well as our operations in the region. We have seen sandboxes in the health sector especially on AI and telehealth, and also for data governance and data sharing. But three things have come up in recent times. One is that many countries have universal health coverage programmes, where the sandbox environment actually helps
them to get applications developed by the private sector into the national mainstream, as long as those applications conform to the standards set by the regulators. In most countries, as we are aware, digital developments have been very fast-paced, whereas the regulations, especially in the health sector, are decades old; there is a pacing issue of catching up with developments in the regulatory space, so these emerging technologies do need the support of sandboxes. But in many countries, and this is also my experience, they don’t necessarily say “regulatory sandbox”: sometimes they call it a testbed, sometimes a living lab or testing environment, and sometimes a regulatory sandbox. Many of them use multiple terms, depending on priorities and local needs. So the three most sought-after needs are these. The first is to get different health applications, even those developed by the private sector, into the national mainstream, conforming to the regulations where regulations currently still don’t exist, and also to shape the regulatory and policy space. The second is interoperability, because most solutions need to be interoperable: there are standards, but sandboxes are still set up to make sure that solutions actually conform to them. And the third is about data. We have occasions, for example medical tourism, or people going for treatment in other countries, where there is a need for information to be shared, and at the same time in a very responsible way. These are the broad areas in which we see sandboxes in our region. In fact, we have a country currently discussing with us their need for sandboxes, and probably support, and they have expressed several challenges as well.
So in fact, we look forward to working together with the DataSphere Initiative and other partners, especially those willing to support us with funding and capacity building in this space. I hope I answered your question; if not, please let me know. Thank you, Sophie and colleagues.
Sophie Tomlinson: Thank you so much, Jai. I think that provides a good snapshot of the different considerations that people and experts in the health sector, especially in the Asia region, are thinking about when it comes to making the most of these technologies for interoperable and cross-border health approaches. We also now have, from the Menti, some of the different barriers people have raised. I’d also like to note a very useful question in the chat from a representative of the Institute for Policy Studies and Media Development in Vietnam, a think tank specializing in digital technology policy. Their question is about how countries design policy packages and govern sandboxes: should AI and data sandboxes be structured separately or integrated? And what types of legislative or regulatory features have proved most effective in making participation more accessible to businesses, especially SMEs? As we go into our final set of interventions, and since I can’t see anyone in the room wanting to make a comment, I’ll keep going. I’d like to invite Alex again to share some of his reactions to what he’s been hearing throughout the discussion: the barriers people have identified in designing sandboxes, and, linking to the cross-border potential Jai was talking about, could you tell us a little about how you’re thinking of this in the context of the EU AI Act as well? That could be helpful. Over to you.
Alex Moltzau: Yes, of course. It's really great to listen to all these different perspectives. I could start with the last questions, especially relating to how to facilitate SMEs and startups. The EU AI Act is fairly explicit that participation for SMEs and startups should be free. So that is one mechanism, because if a startup or SME already has overheads, it could of course be challenging to participate. And on Thiago's questions about civil society, I think one of the really wonderful things about sandboxes, and this is also a conception in the AI Act, is that we have these exit reports. Dissemination activities, and involving a broad set of stakeholders in thinking about what we learned, really matter, because, as the OECD outlined as well, sandboxes can have a cost. To get value out of the money being spent on them, one should not ignore the importance of dissemination. In many ways, sandboxes were created as a measure to try to ensure responsible innovation. And it is worth talking about what irresponsible innovation, or the potential for it, looks like. In the finance sector, with 2008 and the financial crisis, collateralized debt obligations were in a way an irresponsible innovation, and you can ask in what way something like that could have been explored in a sandbox. So I think what we are coming to realize is that AI affects us all, across regions. In a sense, what can we do to really unite across
borders? This is also why joint AI regulatory sandboxes were conceived of as a policy mechanism: to see whether there can be really extensive collaborations on transport or health or other areas, and whether leading regulatory environments could come together to dive into that and figure it out. This is part of what we are going to explore, and it is written into the AI Act itself; the Interoperable Europe Act also mentions cross-border sandboxes. So I think we will see this new type of experimentation over the coming years. What I can say right now is that we are starting to facilitate that and we will be working on the rollout, so any kind of engagement with this will always be welcome over the coming years.
Sophie Tomlinson: Thank you. Thank you, Alex. I can see we have one question from the floor, which is great. Before we go to you, I just want to get Jimson to share some reactions to what Alex has been saying and what we've discussed so far, especially about business incentives for participating in sandboxes. Jimson, knowing the members of AFICTA, what kind of questions would they have, and how could you incentivize business to participate in sandboxes? Would there, as Natalie is saying, perhaps need to be funding for some SMEs? Would there be more incentive if the sandbox were cross-border in nature, actually looking at interoperability between different African countries? Where do you see some of these questions that people have been asking?
Jimson Olufuye: Yes. Thank you very much, Sophie. The discussion has been very fluid, very useful, and highly relevant. To really operationalize sandboxes requires a lot of stakeholders and a lot of interest. And importantly for the SMEs, it needs some coordination, and that's why AFICTA is there. We are engaged in creating the necessary awareness, especially among members that want to create products with countrywide and region-wide benefits. So in this regard, we know that we need to fast-track development, and that is why we need all the partners: in terms of funding, in terms of engagement, and in terms of an appropriate regulatory framework, like the AI Act Alex mentioned and the process of bringing that together, which is now quite well established in the EU. We really want a similar thing happening across Africa with the AU and UNECA, in terms of their projects, like the identity projects across Africa, and in terms of data structuring, so that SMEs can be involved from the initial stage. Then, with meaningful participation, we can produce products that are highly relevant and useful for society. Thank you.
Sophie Tomlinson: Thank you, Jimson. Could we please go to the question in the room? Thank you.
Audience: Yes. Can you hear me?
Sophie Tomlinson: Yes.
Audience: Perfectly. Okay. Hi, my name is Giovanna. I'm at IGF as part of the Brazil Youth Program; I'm one of the facilitators, and it's been an amazing discussion. Thank you very much to Datasphere for putting this discussion together. I have a question about the exit reports, and about the documents that might need to be drafted during the sandbox implementation, and I'm asking whether you have advice for governments or other public institutions that might be setting up a sandbox, because I believe drafting these reports will be a lot of work. And I have some concerns about the authorship of them specifically. Who will do it? What are the roles of the private companies, if any, in drafting them? And what are the goals in having them: not only to create a history and document the activity, but also to propose interpretations and paths forward? Thank you.
Sophie Tomlinson: Great question. Thank you so much. I’m going to take another comment from the floor, and then we’ll address those. Bertrand?
Audience: Thank you, Sophie. I'm Bertrand de la Chapelle, and it's less a comment from the floor, actually, because for full disclosure, I'm attached to the Datasphere Initiative; I'm the Chief Vision Officer. I just wanted to highlight and make an additional comment. There are key words that we don't dare to use, but that are very important in this discussion. One is mistrust. We have to recognize that in the last 20 years, a huge amount of mistrust has grown between public authorities, private actors, and civil society. Sandboxes are one of the tools that bring the capacity for dialogue, particularly when the discussions take place very early on. And in the mapping that the Datasphere has done, we see certain countries using sandboxes not only for compliance verification or pure regulatory aspects, but also to better understand, between the different actors, what the parameters of a particular sector are. The second word is anxiety. There is a little bit of anxiety about this new tool. The methodology is not completely stabilized, and there is a risk: this is not the way operators usually function. There are questions of who takes the lead in an organization and how responsibilities are distributed. The work that the European Commission, and the AI Office in particular, is doing in trying to shape how those things will be handled, and the work that we're doing at the Datasphere through something we launched, the Global Sandboxes Forum, a space for exchanging practices around this, are helping in that regard. And I have here something that I'd be happy to distribute regarding the observatory we have launched, which documents all the experiences around the world on sandboxes that we've documented. Thank you.
Sophie Tomlinson: Thank you so much, Bertrand. Jai, I see you have your hand up. We have seven minutes left. So yeah, if you want to perhaps answer some of the questions that Giovanna put forward, that would be great or build on anything. Thanks.
Jai Ganesh Udayasankaran: Thanks, Sophie. I just wanted to quickly add to what was shared by the speaker from the Datasphere. Most of the time we look at sandboxes as something the regulators own, with a kind of gatekeeping at entry. But why not look at sandboxes as creative, collaborative spaces where we actually help entrepreneurs, because innovation is really required, and funding or resource constraints are universal, irrespective of jurisdiction? So why not use this space as an environment where there is a bit of hand-holding and support from the regulators, the governments, or academia, helping the innovations coming into the space to actually meet the requirements and the expectations in terms of trust, rather than it being just gatekeeping? That's my thought. And AeHIN also uses an approach known as the convergence methodology, where we bring together the various stakeholders within the country as well as those outside it.
Sophie Tomlinson: Thank you, Jai. Sorry, I just want to pause because we’ve got six minutes left and I just would love Thiago to come in to perhaps answer Giovanna’s point on the different types of exit reports. I think that’s something that I just want to make sure we answer that question and I think Thiago would have some ideas on that. Thiago, do you want to maybe share your perspective on that and as an ending comment from you as well? In one minute, if possible. Yes, of course.
Moraes Thiago: I know time is tight, so, going straight to the point: I actually find it fascinating how differently regulators have been dealing with the exit reports. In some cases, exit reports have been drafted by the companies, and then many times they become more internal knowledge for the regulator. But in other contexts, the regulator has decided to take the lead. I can give as examples the experience of the Norwegian DPA, and also the ICO; several times they have been the ones drafting the main exit report. And then, of course, they do some assessment with the participants to make sure nothing is being shared that should not be disclosed. But actually, the public exit reports that have been published really cover the experience itself more than sensitive, confidential issues, which is what the idea should be. And we also see that in the AI Act proposal. So I think it really depends on how you are going to deal with the exit report, but there is definitely room for flexibility here as well.
Sophie Tomlinson: Thank you, Thiago. And Natalie, I wanted to bring you in for a final wrap-up thought, since we're quite short on time. Bertrand mentioned mistrust, and I think that's something that the IGF this year is trying to address: a lot of trust building, and supporting international collaboration as much as we can. How do you think sandboxes can help build this trust at a cross-border level?
Natalie Cohen: Yeah, I think this issue of trust is key. One thing the OECD does is a drivers of trust in government survey, and the proportion of respondents who say they trust that governments will appropriately regulate new technologies was only about 41%. So that shows that trust is definitely low. Regulatory experimentation builds the evidence base for regulatory reform in an area where the risks are not fully understood and regulatory attempts on AI are still at an early stage. A lot has been mentioned about the risks of AI to society and the environment, as well as the obvious economic and innovation benefits. So I think it's that collaboration element: creating a space where regulators, businesses, civil society, and a range of stakeholders can dialogue and actually build the evidence base together, in a way that can then inform and influence a regulatory regime.
Sophie Tomlinson: Thank you. Thank you so much, Natalie. And thank you, everybody, for taking the time. I know a 9am session is sometimes not the easiest one to get to at the IGF, especially after an IGF music night. So thank you all so much for being here, and thank you as well to all the people who joined us online. The time, expertise, and questions you shared were really valuable to us as we try to understand more about how people are thinking about regulatory experimentation, particularly sandboxes. Thank you for joining us, and I hope to see you all soon.
Mariana Rozo-Pan
Speech speed
150 words per minute
Speech length
727 words
Speech time
288 seconds
Sandboxes are collaborative safe spaces for experimentation where stakeholders test technologies against regulatory frameworks
Explanation
Sandboxes are collaborative spaces where different stakeholders come together to craft solutions and experiment with technologies. Regulatory sandboxes specifically allow public and private sectors, along with civil society, to test technologies against existing or developing regulatory frameworks.
Evidence
Interactive audience participation showing people associate sandboxes with collaboration and solutions; childhood Lego building analogy demonstrating flexible, experimental mindset
Major discussion point
What are AI sandboxes and their potential applications
Topics
Legal and regulatory
Sandboxes originated in fintech but now span multiple sectors including AI, health, and transportation across 150+ implementations globally
Explanation
While sandboxes were originally created within the finance sector to test financial technologies, they are now being implemented across various sectors. The DataSphere Initiative has mapped over 150 sandboxes globally focused on different topics, particularly AI innovation.
Evidence
DataSphere Initiative mapping identified over 66 sandboxes that grew to around 150; global distribution map showing sandboxes in developed and developing countries across Latin America, Asia, and Africa
Major discussion point
What are AI sandboxes and their potential applications
Topics
Legal and regulatory | Development
Sandboxes are promising tools for testing bold ideas in collaborative environments that create public value
Explanation
Sandboxes provide a methodology for testing innovative ideas in safe, collaborative spaces that go beyond benefiting specific startups or private companies. They are designed to create broader public value and translate into better technologies for society in general.
Evidence
DataSphere Initiative methodology focusing on responsible design, effective communication and engagement, and ensuring public value creation
Major discussion point
What are AI sandboxes and their potential applications
Topics
Legal and regulatory | Development
Meni Anastasiadou
Speech speed
139 words per minute
Speech length
676 words
Speech time
289 seconds
Regulatory sandboxes enable safe real-world testing of AI systems, particularly beneficial for SMEs
Explanation
Sandboxes provide a mechanism for safe, real-world testing of AI systems, which is especially valuable for small and medium enterprises. They allow businesses to test AI technologies with appropriate safeguards before full market deployment.
Evidence
Aerospace engineering analogy – engineers test airplanes on the ground before they fly to ensure safety; ICC’s four-pillar approach to AI governance framework
Major discussion point
What are AI sandboxes and their potential applications
Topics
Legal and regulatory | Economic
Agreed with
– Alex Moltzau
– Natalie Cohen
– Jimson Olufuye
Agreed on
SMEs need special support and consideration in sandbox participation
AI governance frameworks need to be harmonized, flexible, and supportive of innovation while reducing compliance complexities
Explanation
Effective AI governance requires frameworks that are aligned with existing global agreements to avoid creating a patchwork of regulations. These frameworks should be flexible enough not to hinder investment while creating favorable commercial conditions for entrepreneurship.
Evidence
ICC’s four-pillar narrative on artificial intelligence published in September; ICC represents more than 45 million businesses across 170 countries
Major discussion point
Why sandboxes are needed for AI governance
Topics
Legal and regulatory | Economic
Alex Moltzau
Speech speed
157 words per minute
Speech length
1214 words
Speech time
461 seconds
Responsible innovation requires evidence-based policy making, and sandboxes provide regulatory learning opportunities
Explanation
To achieve responsible innovation, policymakers need an evidence base to inform their decisions rather than relying on buzzwords or unfulfilled promises. Sandboxes serve as a mechanism for regulatory learning, helping regulators build competence on AI while ensuring products work as intended.
Evidence
Background as social data scientist with AI policy experience in Norway; involvement in Norwegian privacy sandbox with exclusively AI cases and published exit reports
Major discussion point
Why sandboxes are needed for AI governance
Topics
Legal and regulatory
Sandboxes help balance innovation with responsibility, ensuring products are both great and safe for citizens
Explanation
Sandboxes address the balancing act between promoting innovation and ensuring responsibility in AI development. As citizens want both great products and safe products/services, sandboxes provide a mechanism to achieve both objectives simultaneously.
Evidence
European Commission’s work on implementing act for AI regulatory sandboxes; AI regulatory sandbox subgroup under the AI board working with member states
Major discussion point
Why sandboxes are needed for AI governance
Topics
Legal and regulatory | Human rights
SME participation should be free according to EU AI Act provisions
Explanation
The EU AI Act explicitly states that participation in sandboxes for small and medium enterprises and startups should be free. This provision aims to remove financial barriers that might prevent smaller companies from participating in regulatory experimentation.
Evidence
EU AI Act explicit provisions regarding free participation for SMEs and startups
Major discussion point
Stakeholder engagement and participation
Topics
Legal and regulatory | Economic
Agreed with
– Meni Anastasiadou
– Natalie Cohen
– Jimson Olufuye
Agreed on
SMEs need special support and consideration in sandbox participation
Exit reports are crucial for dissemination and getting value from sandbox investments
Explanation
Given the costs associated with running sandboxes, exit reports and dissemination activities are essential for extracting value from the investment. These reports help involve broader stakeholders in understanding what was learned from the sandbox experience.
Evidence
EU AI Act conception of exit reports; reference to 2008 financial crisis and collateralized debt obligations as examples of irresponsible innovation
Major discussion point
Documentation and knowledge sharing
Topics
Legal and regulatory
Agreed with
– Natalie Cohen
– Moraes Thiago
– Participant 1
Agreed on
Sandboxes require significant resources and careful planning to be successful
Disagreed with
– Moraes Thiago
Disagreed on
Exit report authorship and responsibility
Cross-border sandboxes can facilitate extensive collaboration on transport, health, and other sectors between regulatory environments
Explanation
Joint AI regulatory sandboxes are conceived as policy mechanisms to enable collaboration across borders, particularly in sectors like transport and health. This approach allows leading regulatory environments to work together on common challenges.
Evidence
AI Act and Interoperable Europe Act mentions of cross-border sandboxes; European Commission facilitation of cross-border experimentation rollout
Major discussion point
Sector-specific applications and cross-border potential
Topics
Legal and regulatory | Infrastructure
Agreed with
– Jimson Olufuye
– Jai Ganesh Udayasankaran
– Sophie Tomlinson
Agreed on
Cross-border and regional cooperation enhances sandbox effectiveness
Jimson Olufuye
Speech speed
124 words per minute
Speech length
914 words
Speech time
440 seconds
AI regulation must be people-centered and inclusive, with sandboxes helping bridge the digital divide
Explanation
AI regulation should align with the vision of a people-centered, inclusive information society as envisioned by WSIS. Sandboxes can play a role in ensuring AI development serves to bridge the digital divide and achieve Sustainable Development Goals rather than create harmful products.
Evidence
AFICTA covers 43 countries in Africa; Nigeria’s AI strategy development and move toward AI law; Central Bank of Nigeria’s adoption of sandboxes in financial sector regulation
Major discussion point
Why sandboxes are needed for AI governance
Topics
Legal and regulatory | Development | Human rights
African countries are embracing experimentation for emerging technologies, with less than 10 having AI strategies currently
Explanation
While African countries are showing interest in regulatory experimentation for emerging technologies like AI, the continent is still in early stages with fewer than 10 countries having developed AI strategies. There’s a need to move from strategy development to actual AI regulation.
Evidence
Less than 10 African countries have AI strategy; Nigeria’s recent AI strategy development and progression toward AI law; AFICTA’s representation across 43 African countries
Major discussion point
Sector-specific applications and cross-border potential
Topics
Legal and regulatory | Development
Private sector coordination through organizations like AFICTA is essential for meaningful SME participation
Explanation
Organizations like AFICTA play a crucial role in coordinating private sector engagement and creating awareness among members who want to develop products with countrywide and regional benefits. This coordination is essential for operationalizing sandboxes effectively.
Evidence
AFICTA founded in 2012 with six countries, now covering 43 countries; members include ICT associations, companies, and individual professionals
Major discussion point
Stakeholder engagement and participation
Topics
Economic | Development
Agreed with
– Meni Anastasiadou
– Alex Moltzau
– Natalie Cohen
Agreed on
SMEs need special support and consideration in sandbox participation
Regional cooperation is essential for products with countrywide and regional benefits
Explanation
To fast-track development and create products that have broader impact, regional cooperation is necessary. This includes coordination between organizations, appropriate funding, engagement, and regulatory frameworks similar to what exists in the EU.
Evidence
Reference to AU, UNECA projects like identity projects across Africa; need for data structuring to enable SME involvement from initial stages
Major discussion point
Sector-specific applications and cross-border potential
Topics
Development | Legal and regulatory
Agreed with
– Alex Moltzau
– Jai Ganesh Udayasankaran
– Sophie Tomlinson
Agreed on
Cross-border and regional cooperation enhances sandbox effectiveness
Natalie Cohen
Speech speed
150 words per minute
Speech length
994 words
Speech time
397 seconds
Sandboxes require significant governance resources, clear eligibility criteria, testing frameworks, and exit strategies
Explanation
Successful sandboxes are resource-intensive and require careful planning including transparent eligibility criteria, clear testing frameworks, proper evaluation processes, and defined exit strategies. Without these elements, sandboxes can fail to achieve their objectives.
Evidence
OECD R2021 recommendation on agile regulatory governance; OECD experience providing technical assistance to fix broken sandboxes; upcoming OECD toolkit on sandbox development
Major discussion point
Implementation challenges and resource considerations
Topics
Legal and regulatory
Agreed with
– Moraes Thiago
– Participant 1
– Alex Moltzau
Agreed on
Sandboxes require significant resources and careful planning to be successful
Governments need to consider funding support for SMEs and avoid creating market distortions while providing participation incentives
Explanation
Successful sandboxes require a balancing act between providing incentives for business participation and avoiding unfair market advantages. SMEs often need additional support, including funding, data access, and legal/compliance resources, to participate effectively.
Evidence
OECD observation that some successful sandboxes primarily benefited larger corporates rather than SMEs; need for diverse and sustainable approach to sandboxes
Major discussion point
Implementation challenges and resource considerations
Topics
Legal and regulatory | Economic
Agreed with
– Meni Anastasiadou
– Alex Moltzau
– Jimson Olufuye
Agreed on
SMEs need special support and consideration in sandbox participation
Disagreed with
– Jai Ganesh Udayasankaran
Disagreed on
Primary purpose and framing of sandboxes
Only 41% of survey respondents trust governments to appropriately regulate new technologies, showing need for evidence-based collaboration
Explanation
OECD research shows low levels of public trust in government’s ability to regulate new technologies appropriately. This trust deficit highlights the importance of collaborative approaches like sandboxes that bring together multiple stakeholders to build evidence-based regulatory approaches.
Evidence
OECD driver of trust in government survey showing only 41% trust in appropriate regulation of new technologies
Major discussion point
Trust building and collaboration
Topics
Legal and regulatory | Human rights
Regulatory experimentation builds evidence base for reform in areas where risks are not fully understood
Explanation
In emerging technology areas like AI where risks are not fully understood and regulatory attempts are still early stage, regulatory experimentation provides a collaborative space for building the evidence base needed to inform regulatory reform. This addresses both the potential risks and obvious benefits of technologies like AI.
Evidence
OECD focus on regulatory experimentation as part of agile regulatory governance; recognition that AI regulatory attempts are still early stage
Major discussion point
Trust building and collaboration
Topics
Legal and regulatory
Moraes Thiago
Speech speed
137 words per minute
Speech length
983 words
Speech time
428 seconds
Civil society and individuals affected by AI solutions need meaningful roles before, during, and after sandbox implementation
Explanation
When sandboxing AI solutions, it’s important to consider that individuals will be affected regardless of whether their personal data is processed. Civil society and affected individuals should have meaningful participation throughout the entire sandbox process, not just as an afterthought.
Evidence
Current PhD research on the role of civil society in sandboxes; experience as practitioner at Brazilian Data Protection Authority working on pilot sandbox launch
Major discussion point
Stakeholder engagement and participation
Topics
Legal and regulatory | Human rights
Resource limitations can be addressed through pilot approaches, partnerships, and international cooperation
Explanation
When regulators face resource constraints, they can start by ‘sandboxing the sandbox’ through pilot programs. Partnerships with international institutions, development banks, and other organizations can provide funding and capacity building support to overcome resource limitations.
Evidence
Examples of institutions like CAF, CEPAL supporting sandbox initiatives; Brazilian Data Protection Authority’s pilot sandbox development; common practice of creating pilots before full sandboxes
Major discussion point
Implementation challenges and resource considerations
Topics
Legal and regulatory | Development
Agreed with
– Natalie Cohen
– Participant 1
– Alex Moltzau
Agreed on
Sandboxes require significant resources and careful planning to be successful
Exit report authorship varies between companies and regulators, with flexibility in approach depending on context
Explanation
Different regulators handle exit reports differently – some have companies draft them for internal use, while others like the Norwegian DPA and the ICO take the lead in drafting public reports. The approach depends on the regulator’s strategy and the intended use of the reports.
Evidence
Examples from the Norwegian DPA and the ICO where regulators drafted the main exit reports; variation in practices across different jurisdictions; EU AI Act proposal allowing flexibility
Major discussion point
Documentation and knowledge sharing
Topics
Legal and regulatory
Disagreed with
– Alex Moltzau
Disagreed on
Exit report authorship and responsibility
Public exit reports focus on experience sharing rather than sensitive confidential information
Explanation
When exit reports are made public, they typically focus on sharing the sandbox experience and lessons learned rather than disclosing sensitive or confidential business information. This approach allows for knowledge sharing while protecting participant interests.
Evidence
Analysis of published exit reports showing focus on experience rather than sensitive information; regulatory assessment processes to ensure appropriate disclosure levels
Major discussion point
Documentation and knowledge sharing
Topics
Legal and regulatory
Participant 1
Speech speed
155 words per minute
Speech length
1038 words
Speech time
399 seconds
Funding challenges exist, with potential solutions including cost-sharing models between affected sectors
Explanation
African regulators are grappling with how to fund sandboxes, as core operational funding for experimentation is often not available. One solution being explored is cost-sharing models where multiple regulators from different sectors that would benefit from a sandbox contribute to its funding.
Evidence
25 national sandboxes identified in Africa with 24 in finance sector; co-creation activities in Africa exploring cost-sharing between regulators from different affected sectors
Major discussion point
Implementation challenges and resource considerations
Topics
Legal and regulatory | Development
Agreed with
– Natalie Cohen
– Moraes Thiago
– Alex Moltzau
Agreed on
Sandboxes require significant resources and careful planning to be successful
Legal backing for sandboxing authority is often unclear and needs to be established
Explanation
Many regulators want to establish sandboxes but are uncertain whether they have the legal authority to do so. This creates a challenge where regulators need to find legal backing for experimentation or work to establish such authority if it doesn’t exist.
Evidence
Feedback from co-creation labs showing regulators questioning their legal authority to sandbox; common challenge identified across multiple jurisdictions in Africa
Major discussion point
Implementation challenges and resource considerations
Topics
Legal and regulatory
Jai Ganesh Udayasankaran
Speech speed
151 words per minute
Speech length
906 words
Speech time
358 seconds
Health sector sandboxes address universal health coverage, interoperability standards, and cross-border data sharing needs
Explanation
In the health sector, sandboxes are being used to help private sector applications integrate into national mainstream systems, ensure interoperability with existing standards, and facilitate responsible cross-border health data sharing for medical tourism and treatment abroad.
Evidence
Asia eHealth Information Network representation from 15 countries with 2,600+ members across 84 countries; examples of universal health coverage programs using sandboxes; medical tourism data sharing needs
Major discussion point
Sector-specific applications and cross-border potential
Topics
Legal and regulatory | Development
Agreed with
– Alex Moltzau
– Jimson Olufuye
– Sophie Tomlinson
Agreed on
Cross-border and regional cooperation enhances sandbox effectiveness
Sandboxes should function as collaborative spaces with hand-holding support rather than just gatekeeping
Explanation
Instead of viewing sandboxes primarily as regulatory gatekeeping mechanisms, they should be seen as creative collaborative spaces that provide hand-holding support to entrepreneurs. This approach helps innovations meet regulatory requirements and expectations while fostering trust.
Evidence
Asia eHealth Information Network’s convergence methodology bringing various stakeholders together; universal resource constraints across jurisdictions
Major discussion point
Stakeholder engagement and participation
Topics
Legal and regulatory | Development
Agreed with
– Mariana Rozo-Pan
– Meni Anastasiadou
– Alex Moltzau
– Natalie Cohen
Agreed on
Sandboxes are collaborative spaces that bring together multiple stakeholders for experimentation
Disagreed with
– Natalie Cohen
Disagreed on
Primary purpose and framing of sandboxes
Sophie Tomlinson
Speech speed
156 words per minute
Speech length
2417 words
Speech time
926 seconds
Sandboxes are being explored globally as tools for digital policy challenges and new technologies
Explanation
Sandboxes for digital policy challenges and new technologies aren’t limited to Europe but are being explored worldwide. This includes implementations in Asia (notably Singapore), Latin America (Brazil, Chile), and Africa, demonstrating the global nature of this regulatory experimentation approach.
Evidence
DataSphere Initiative’s global mapping work; Africa Sandboxes Forum; co-creation lab in Abuja, Nigeria as part of African Data Protection Conference; Nigeria developing sandbox for data protection law compliance
Major discussion point
What are AI sandboxes and their potential applications
Topics
Legal and regulatory | Development
Agreed with
– Alex Moltzau
– Jimson Olufuye
– Jai Ganesh Udayasankaran
Agreed on
Cross-border and regional cooperation enhances sandbox effectiveness
The DataSphere Initiative provides comprehensive support for sandbox development including training and capacity building
Explanation
The DataSphere Initiative offers various forms of support for organizations wanting to develop sandboxes, including co-creation activities, one-on-one coaching journeys, and master classes. This support addresses the recognized need for sandboxes while helping navigate implementation challenges around resources and technical requirements.
Evidence
DataSphere Initiative’s work as think-do-tank on data governance and sandboxes; workshop series at IGF; QR code for interactive participation; diverse panel of speakers from multiple sectors and regions
Major discussion point
Implementation challenges and resource considerations
Topics
Legal and regulatory | Development
Audience
Speech speed
144 words per minute
Speech length
474 words
Speech time
196 seconds
Exit reports require careful consideration of authorship, roles, and documentation goals
Explanation
There are important questions about who should draft exit reports from sandboxes, what roles private companies should play in their creation, and how to balance documentation with proposing interpretations and paths forward. The concern is that drafting comprehensive reports will require significant work and clear authorization processes.
Evidence
Question from Brazil Youth Program facilitator about exit report documentation and authorization processes
Major discussion point
Documentation and knowledge sharing
Topics
Legal and regulatory
Countries need guidance on policy packages and integration approaches for AI and data sandboxes
Explanation
There are important design questions about whether AI and data sandboxes should be structured separately or integrated, and what legislative or regulatory features make participation more accessible to businesses, especially SMEs. This reflects the need for clearer frameworks on sandbox architecture and accessibility.
Evidence
Question from Institute for Policy Studies and Media Development in Vietnam about policy package design and SME accessibility
Major discussion point
Implementation challenges and resource considerations
Topics
Legal and regulatory | Economic
Agreements
Agreement points
Sandboxes are collaborative spaces that bring together multiple stakeholders for experimentation
Speakers
– Mariana Rozo-Pan
– Meni Anastasiadou
– Alex Moltzau
– Natalie Cohen
– Jai Ganesh Udayasankaran
Arguments
Sandboxes are safe, collaborative spaces in which, by nature, different stakeholders come together to craft solutions and experiment with technologies
Regulatory sandboxes respond to this framework of governance and enable the safe, real-world testing of AI systems
Sandboxes provide a collaborative space for building the evidence base needed to inform regulatory reform
Regulatory experimentation builds the evidence base for making regulatory reform in a way that can then inform and influence a regulatory regime
Sandboxes should function as collaborative spaces with hand-holding support rather than just gatekeeping
Summary
All speakers agree that sandboxes fundamentally serve as collaborative platforms where diverse stakeholders (public sector, private sector, civil society) come together to experiment with technologies and build evidence for regulatory decision-making
Topics
Legal and regulatory | Development
SMEs need special support and consideration in sandbox participation
Speakers
– Meni Anastasiadou
– Alex Moltzau
– Natalie Cohen
– Jimson Olufuye
Arguments
Regulatory sandboxes enable safe real-world testing of AI systems, particularly beneficial for SMEs
SME participation should be free according to EU AI Act provisions
Governments need to consider funding support for SMEs and avoid creating market distortions while providing participation incentives
Private sector coordination through organizations like AFICTA is essential for meaningful SME participation
Summary
There is strong consensus that small and medium enterprises face unique challenges in participating in sandboxes and require targeted support including free participation, funding assistance, and coordinated engagement through representative organizations
Topics
Legal and regulatory | Economic
Sandboxes require significant resources and careful planning to be successful
Speakers
– Natalie Cohen
– Moraes Thiago
– Participant 1
– Alex Moltzau
Arguments
Sandboxes require significant governance resources, clear eligibility criteria, testing frameworks, and exit strategies
Resource limitations can be addressed through pilot approaches, partnerships, and international cooperation
Funding challenges exist, with potential solutions including cost-sharing models between affected sectors
Exit reports are crucial for dissemination and getting value from sandbox investments
Summary
All speakers acknowledge that sandboxes are resource-intensive endeavors requiring careful planning, adequate funding, clear frameworks, and proper documentation to achieve their objectives
Topics
Legal and regulatory | Development
Cross-border and regional cooperation enhances sandbox effectiveness
Speakers
– Alex Moltzau
– Jimson Olufuye
– Jai Ganesh Udayasankaran
– Sophie Tomlinson
Arguments
Cross-border sandboxes can facilitate extensive collaboration on transport, health, and other sectors between regulatory environments
Regional cooperation is essential for products with countrywide and regional benefits
Health sector sandboxes address universal health coverage, interoperability standards, and cross-border data sharing needs
Sandboxes are being explored globally as tools for digital policy challenges and new technologies
Summary
Speakers agree that sandboxes become more effective when they operate across borders and regions, enabling broader collaboration and addressing shared challenges in sectors like health and transport
Topics
Legal and regulatory | Infrastructure | Development
Similar viewpoints
Both speakers emphasize the critical need for evidence-based approaches to AI regulation and the role of sandboxes in building this evidence base, particularly given low public trust in government regulation of new technologies
Speakers
– Alex Moltzau
– Natalie Cohen
Arguments
Responsible innovation requires evidence-based policy making, and sandboxes provide regulatory learning opportunities
Only 41% of people trust governments to appropriately regulate new technologies, showing need for evidence-based collaboration
Topics
Legal and regulatory | Human rights
Both speakers advocate for more inclusive and supportive approaches to sandboxes that go beyond regulatory gatekeeping to provide meaningful participation opportunities for affected communities and entrepreneurs
Speakers
– Moraes Thiago
– Jai Ganesh Udayasankaran
Arguments
Civil society and individuals affected by AI solutions need meaningful roles before, during, and after sandbox implementation
Sandboxes should function as collaborative spaces with hand-holding support rather than just gatekeeping
Topics
Legal and regulatory | Human rights | Development
Both speakers from African contexts highlight the need for inclusive approaches to AI regulation and the practical challenges of establishing regulatory frameworks in developing economies
Speakers
– Jimson Olufuye
– Participant 1
Arguments
AI regulation must be people-centered and inclusive, with sandboxes helping bridge the digital divide
Legal backing for sandboxing authority is often unclear and needs to be established
Topics
Legal and regulatory | Development | Human rights
Unexpected consensus
Civil society participation in sandboxes
Speakers
– Moraes Thiago
– Alex Moltzau
– Jai Ganesh Udayasankaran
Arguments
Civil society and individuals affected by AI solutions need meaningful roles before, during, and after sandbox implementation
Exit reports are crucial for dissemination and getting value from sandbox investments
Sandboxes should function as collaborative spaces with hand-holding support rather than just gatekeeping
Explanation
Despite representing different regions and institutional perspectives (academic researcher, EU policy maker, Asian health network), there was unexpected alignment on the need for meaningful civil society engagement throughout the sandbox process, not just as beneficiaries but as active participants
Topics
Legal and regulatory | Human rights
Trust-building as a core function of sandboxes
Speakers
– Natalie Cohen
– Bertrand de la Chapelle
– Alex Moltzau
Arguments
Only 41% of people trust governments to appropriately regulate new technologies, showing need for evidence-based collaboration
Sandboxes are one of the tools that brings the capacity of dialogue, particularly when the discussions are taking place very early on
Sandboxes help balance innovation with responsibility, ensuring products are both great and safe for citizens
Explanation
There was unexpected consensus across different institutional perspectives that sandboxes serve a crucial trust-building function between stakeholders, addressing broader societal mistrust in technology governance beyond just regulatory compliance
Topics
Legal and regulatory | Human rights
Overall assessment
Summary
The discussion revealed remarkably high consensus among speakers from diverse geographical and institutional backgrounds on fundamental aspects of AI sandboxes. Key areas of agreement included the collaborative nature of sandboxes, the need for special SME support, resource requirements, and the value of cross-border cooperation. There was also unexpected alignment on the importance of civil society participation and trust-building functions.
Consensus level
High consensus with strong implications for global sandbox development. The alignment suggests that despite different regulatory contexts, there are universal principles and challenges in sandbox implementation. This consensus provides a solid foundation for international cooperation and knowledge sharing in AI governance, while highlighting the need for coordinated approaches to address common challenges like resource constraints and stakeholder engagement.
Differences
Different viewpoints
Exit report authorship and responsibility
Speakers
– Moraes Thiago
– Alex Moltzau
Arguments
Exit report authorship varies between companies and regulators, with flexibility in approach depending on context
Exit reports are crucial for dissemination and getting value from sandbox investments
Summary
Thiago emphasizes flexibility in who drafts exit reports (companies vs regulators) with examples showing different approaches, while Alex focuses on the importance of exit reports for dissemination and stakeholder involvement, suggesting a more structured approach to maximize investment value
Topics
Legal and regulatory
Primary purpose and framing of sandboxes
Speakers
– Jai Ganesh Udayasankaran
– Natalie Cohen
Arguments
Sandboxes should function as collaborative spaces with hand-holding support rather than just gatekeeping
Governments need to consider funding support for SMEs and avoid creating market distortions while providing participation incentives
Summary
Jai advocates for sandboxes as supportive, collaborative environments that help entrepreneurs meet requirements, while Natalie emphasizes the need for careful balance to avoid market distortions and the resource-intensive nature of proper sandbox governance
Topics
Legal and regulatory | Development
Unexpected differences
Resource allocation and funding responsibility
Speakers
– Participant 1
– Natalie Cohen
– Moraes Thiago
Arguments
Funding challenges exist, with potential solutions including cost-sharing models between affected sectors
Governments need to consider funding support for SMEs and avoid creating market distortions while providing participation incentives
Resource limitations can be addressed through pilot approaches, partnerships, and international cooperation
Explanation
While all speakers acknowledge resource constraints, they propose different solutions that could potentially conflict. The African perspective suggests cost-sharing between regulators, OECD emphasizes government funding responsibility, and the Brazilian perspective focuses on international partnerships. This disagreement is unexpected because it reveals different regional approaches to the same fundamental challenge
Topics
Legal and regulatory | Development
Overall assessment
Summary
The discussion shows remarkable consensus on the value and potential of AI sandboxes, with disagreements primarily focused on implementation details rather than fundamental concepts. Key areas of disagreement include exit report management, the balance between support and market neutrality, and funding mechanisms.
Disagreement level
Low to moderate disagreement level. The speakers largely agree on goals but differ on methods and emphasis. This suggests a maturing field where practitioners are working through operational details rather than debating fundamental principles. The implications are positive – there’s broad consensus on the value of sandboxes, but more work is needed on standardizing best practices and addressing regional variations in implementation approaches.
Partial agreements
Partial agreements
Takeaways
Key takeaways
AI sandboxes are collaborative safe spaces that enable stakeholders to test technologies against regulatory frameworks, with over 150 implementations globally spanning multiple sectors beyond their fintech origins
Sandboxes serve as crucial tools for balancing innovation with responsibility, providing evidence-based regulatory learning while ensuring AI products are both innovative and safe for citizens
Successful sandbox implementation requires significant resources, clear governance structures, transparent eligibility criteria, and well-defined exit strategies
SME participation is critical and should be supported through free participation models and funding assistance to avoid market distortions while ensuring inclusive innovation
Civil society engagement is essential throughout the sandbox lifecycle, as individuals affected by AI solutions need meaningful representation before, during, and after implementation
Cross-border collaboration through sandboxes can address global AI governance challenges, particularly in sectors like health, transport, and data sharing
Trust-building between public authorities, private sector, and civil society is a fundamental benefit of sandboxes, addressing widespread mistrust in technology governance
Exit reports and knowledge dissemination are crucial for maximizing value from sandbox investments and informing broader regulatory frameworks
Resolutions and action items
EU AI Office to release draft Implementing Act for AI regulatory sandboxes for public comment in autumn
OECD to publish toolkit on sandbox development and design in coming weeks
Brazilian Data Protection Authority to launch pilot sandbox with news expected soon
DataSphere Initiative to continue offering one-on-one coaching, master classes, and co-creation activities for sandbox development
Participants encouraged to engage with EU’s public consultation process when the draft Implementing Act is released
Continued collaboration between DataSphere Initiative and Asia eHealth Information Network on health sector sandboxes
Unresolved issues
Whether AI and data sandboxes should be structured separately or integrated remains unclear
Legal backing for sandboxing authority is often unclear and needs to be established in many jurisdictions
Funding models and resource allocation strategies for sandboxes, particularly in developing countries, require further development
The specific role and meaningful participation mechanisms for civil society throughout the sandbox process need better definition
Standardization of exit report formats and authorship responsibilities across different regulatory contexts
How to effectively measure and evaluate the real-world impact and value creation of AI sandboxes
Balancing transparency requirements with protection of commercially sensitive information in sandbox operations
Suggested compromises
Cost-sharing models between different regulatory sectors that benefit from sandbox outcomes to address funding constraints
Pilot or ‘sandbox the sandbox’ approaches to test the waters with limited resources before full implementation
Flexible duration models ranging from 3 months to 5 years depending on testing objectives and available resources
Hybrid sandbox models combining regulatory and operational elements to maximize utility
Free participation for SMEs while larger corporates contribute to funding sustainability
Collaborative exit report development between regulators and participants with appropriate confidentiality protections
Regional cooperation frameworks to share costs and benefits of cross-border sandbox initiatives
Thought provoking comments
We often forget how we used to play when we were kids. And as we were children growing up, we were actually quite excited about experimenting and about thinking about building things, building them, and then kind of destroying them and building something new again. And that flexible, agile mindset that maybe we had when we were children is what we’re often lacking when it comes to building agile regulations and shaping how we’re governing technologies.
Speaker
Mariana Rozo-Pan
Reason
This comment reframes regulatory innovation through a powerful metaphor that makes the abstract concept of sandboxes tangible and relatable. It challenges the traditional rigid approach to regulation by connecting it to universal human experience of creative play and experimentation.
Impact
This opening metaphor set the collaborative and experimental tone for the entire discussion. It influenced how other speakers framed their contributions, with many referring back to concepts of experimentation, collaboration, and safe spaces throughout the session.
Sometimes what SMEs need support with is part of accessing, could be accessing data, it could be legal and compliance resources as well… Some successful sandboxes have been successful in terms of testing products and services and bringing them to market, but they’ve been primarily successful with larger corporates.
Speaker
Natalie Cohen
Reason
This comment introduces a critical equity dimension to sandbox design, highlighting how these supposedly democratizing tools might actually reinforce existing power imbalances between large corporations and SMEs.
Impact
This observation shifted the discussion from purely technical implementation questions to broader questions of accessibility and fairness. It prompted subsequent speakers like Alex Moltzau to address how the EU AI Act specifically mandates free participation for SMEs, and influenced Jimson’s comments about the need for coordination and support.
Civil society and individuals, they might have an important role before, during and after the sandbox is done… We’re talking about individuals that are having their personal data processed or that will be affected by these AI solutions, regardless if the personal data has been processed or not.
Speaker
Moraes Thiago
Reason
This comment challenges the typical stakeholder model of sandboxes by highlighting a significant gap – the meaningful inclusion of those most affected by AI systems. It raises fundamental questions about democratic participation in technology governance.
Impact
This provocation introduced a new dimension to the conversation that hadn’t been adequately addressed. It influenced later discussions about trust-building and prompted Bertrand’s comment about mistrust between stakeholders, while also connecting to broader IGF themes about inclusive governance.
There are key words that we don’t dare to use, but that are very important in this discussion. One is mistrust… And we have to recognize that in the last 20 years, a huge amount of mistrust has grown between public authorities, private actors, and civil society. Sandboxes are one of the tools that brings the capacity of dialogue.
Speaker
Bertrand de la Chapelle
Reason
This comment directly addresses the elephant in the room – the underlying trust deficit that makes regulatory innovation necessary. By naming ‘mistrust’ and ‘anxiety’ as key but unspoken factors, it reframes sandboxes not just as technical tools but as trust-building mechanisms.
Impact
This intervention fundamentally shifted the conversation’s framing from technical implementation to the deeper social and political context. It prompted Natalie’s closing comment about the OECD trust survey showing only 41% of people trust governments to appropriately regulate new technologies, providing empirical support for Bertrand’s observation.
Why not look at the sandboxes in terms of being a creative or collaborative space where we actually help the entrepreneurs… rather than just being gatekeeping… why not we use this space as an environment where there is a bit of a hand-holding and support that comes from the regulators or the governments or the academy.
Speaker
Jai Ganesh Udayasankaran
Reason
This comment challenges the traditional regulatory paradigm by proposing a shift from gatekeeping to nurturing. It reframes the regulator-industry relationship from adversarial to collaborative, suggesting sandboxes as spaces for capacity building rather than just compliance testing.
Impact
This perspective added a constructive dimension to the discussion about regulatory approaches, moving beyond the compliance-focused view to consider how sandboxes could actively support innovation while ensuring responsible development.
We learned that it’s actually one thing that regulators grapple with. Sometimes it’s not clear that they actually are authorities allowed to sandbox in any way. So, how do they look to find that legal backing to carry out such an experimentation?
Speaker
Participant 1 (Maureen)
Reason
This comment reveals a fundamental practical barrier that challenges assumptions about regulatory authority and flexibility. It highlights how existing legal frameworks may not accommodate experimental approaches, creating a chicken-and-egg problem for regulatory innovation.
Impact
This observation grounded the discussion in practical realities facing regulators, particularly in developing countries. It influenced the conversation about resource constraints and the need for legal framework adaptation to support experimental governance approaches.
Overall assessment
These key comments collectively transformed what could have been a technical discussion about sandbox implementation into a nuanced exploration of the social, political, and structural challenges of regulatory innovation. The progression from Mariana’s playful metaphor through increasingly complex considerations of equity, inclusion, trust, and legal authority created a comprehensive framework for understanding sandboxes not just as tools, but as mechanisms for reimagining the relationship between innovation and governance. The comments built upon each other to reveal sandboxes as both promising solutions and reflections of deeper systemic challenges in technology governance, ultimately framing them as trust-building exercises in an era of widespread institutional skepticism.
Follow-up questions
What will be the role of civil society throughout all the sandbox experience – before, during and after the sandbox is done?
Speaker
Moraes Thiago
Explanation
This is identified as an important gap in current sandbox frameworks, especially when dealing with AI solutions that affect individuals and their personal data, requiring better inclusion of civil society voices
How do countries design policy packages and govern sandboxes? Should AI and data sandboxes be structured separately or integrated?
Speaker
Representative from Institute for Policy Studies and Media Development in Vietnam
Explanation
This addresses fundamental design questions about sandbox architecture and whether different technology domains should be handled together or separately
What types of legislative or regulatory features have proved the most effective in making participation more accessible to businesses and especially SMEs?
Speaker
Representative from Institute for Policy Studies and Media Development in Vietnam
Explanation
This focuses on practical implementation challenges and ensuring inclusive participation across different business sizes
How can cost-sharing models work between different regulators or sectors when setting up sandboxes?
Speaker
Maureen (Africa Sandboxes Forum Lead)
Explanation
This addresses resource constraints by exploring collaborative funding approaches when sandboxes affect multiple sectors or regulatory domains
What legal models can authorities use to establish sandboxes when they lack clear legal backing?
Speaker
Maureen (Africa Sandboxes Forum Lead)
Explanation
Many regulators want to establish sandboxes but are uncertain about their legal authority to do so, requiring clarification of legal frameworks
How can cross-border AI regulatory sandboxes be effectively implemented and what collaboration mechanisms are needed?
Speaker
Alex Moltzau
Explanation
This explores the potential for international cooperation through joint sandboxes, particularly relevant for AI systems that operate across borders
What are the specific roles and responsibilities of private companies versus public institutions in drafting exit reports and documentation?
Speaker
Giovanna (Brazil Youth Program)
Explanation
This addresses practical implementation questions about documentation responsibilities and ensuring proper knowledge transfer from sandbox experiences
How can sandboxes be used as collaborative spaces for hand-holding and support rather than just gatekeeping?
Speaker
Jai Ganesh Udayasankaran
Explanation
This suggests a shift in sandbox philosophy from regulatory compliance checking to more supportive innovation facilitation
How can sandboxes help build trust at a cross-border level between different stakeholders?
Speaker
Sophie Tomlinson
Explanation
Given low trust in government regulation of new technologies (only 41% of people, according to the OECD), understanding how sandboxes can build international trust is crucial
Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.
Day 0 Event #248 No One Left Behind Digital Inclusion As a Human Right in the Global Digital Age
Session at a glance
Summary
This discussion at the Internet Governance Forum focused on digital inclusion and addressing the global digital divide through international collaboration between policymakers, researchers, and industry experts. The panel explored how to ensure no one is left behind as society becomes increasingly digital, examining barriers to meaningful connectivity beyond basic internet coverage.
Norwegian Minister Osmund Grever-Alqvist emphasized that while 92% of the planet has internet coverage, one-third of the population remains offline due to barriers including infrastructure gaps, affordability issues, and digital illiteracy. He highlighted the importance of digital public infrastructure and open-source solutions in fostering inclusive development. Malin Rygg from Norway’s accessibility authority presented a framework addressing three core dimensions: connectivity, accessibility, and digital skills, noting that 1.3 billion people worldwide live with disabilities and require inclusive design from the outset.
Maja Brynteson discussed the “Nordic and Baltic paradox,” where highly digitalized societies risk deepening inequality as digital participation becomes essential for daily life. She identified vulnerable groups including older adults, people with disabilities, immigrants, and rural communities, emphasizing that digital exclusion is multidimensional and context-specific. Irene Mbari-Kirika from Kenya showcased African innovation in assistive technology, highlighting the continent’s mobile-first approach and the development of Kenya’s ICT accessibility standard.
The panelists agreed that digital inclusion must be framed as a human rights issue, requiring comprehensive legislation, technical standards, and enforcement mechanisms. They stressed the need for universal design principles, multi-stakeholder collaboration, and capacity building in developing countries. The discussion concluded that without closing digital gaps, societies face increased inequality and exclusion from essential services, education, and economic opportunities.
Key points
## Major Discussion Points:
– **The Digital Divide Paradox**: Despite high digitalization rates in Nordic/Baltic countries (92% global internet coverage), one-third of the global population remains offline, with meaningful connectivity being the key challenge rather than just coverage access
– **Digital Inclusion as a Human Rights Issue**: The discussion emphasized framing digital access not as a convenience or consumer choice, but as fundamental to human rights including education, employment, healthcare, and democratic participation
– **Multi-dimensional Barriers to Digital Inclusion**: Participants identified interconnected obstacles including connectivity issues, accessibility challenges, digital skills gaps, and the need for inclusive design from the start rather than retrofitting solutions
– **Global South Innovation and Standards**: Strong emphasis on Africa and developing countries as creators and innovators rather than just consumers of technology, with Kenya’s ICT accessibility standard highlighted as a model for global adoption
– **Policy and Regulatory Frameworks**: Discussion of effective strategies including legislation with clear obligations and deadlines, technical standards alignment across countries, and the need for enforcement mechanisms to drive meaningful change
## Overall Purpose:
The discussion aimed to foster international collaboration on digital inclusion by bringing together policymakers, researchers, and industry experts to explore practical solutions for closing the digital divide. The goal was to share successful examples from different regions (Nordic countries and Global South) and generate actionable insights for ensuring no one is left behind in an increasingly digital world.
## Overall Tone:
The discussion maintained a consistently constructive and collaborative tone throughout. It began with formal presentations but evolved into an engaged, solution-oriented dialogue. The tone was notably inclusive and respectful, with speakers building on each other’s points rather than debating. There was a sense of urgency about addressing digital exclusion, balanced with optimism about innovative solutions and international cooperation. The conversation remained professional yet passionate, reflecting the speakers’ genuine commitment to digital inclusion as both a practical necessity and moral imperative.
Speakers
**Speakers from the provided list:**
– **Fredrik Matheson** – Moderator of the session
– **Asmund Grover Aukrust** – Norwegian Minister of International Development, responsible for international development in countries outside the OSCE, Middle East and Afghanistan, and Norwegian humanitarian efforts
– **Inmaculada Porrero** – Senior expert on disability at the European Commission, leading work on the rights of people with disabilities with a focus on accessibility and assistive technologies since 1991
– **Yu Ping Chan** – Senior program officer in the Office of the Secretary General’s Envoy on Technology at the United Nations, coordinates work on follow-up of the Secretary General’s roadmap for digital cooperation and digital-related aspects of the Common Agenda Report
– **Dan Sjoblom** – Director General of the Swedish Post and Telecom Authority, appointed by the Swedish government in 2017, previously chairperson of BEREC (Body of European Regulators for Electronic Communications)
– **Irene Mbari-Kirika** – Founder and executive director of Enable, recognized dynamic global strategic leader and executive level innovator bringing assistive technologies and key legislation to Kenya for making digital accessible for all
– **Malin Rygg** – Head of Norway’s accessibility watchdog, the authority for universal design of ICT, working to transform it into a data-driven powerhouse
– **Maja Brynteson** – Research fellow at Nordregio, background in sustainable development and management studies
**Additional speakers:**
None identified – all speakers in the transcript were included in the provided speakers names list.
Full session report
# Comprehensive Report: Digital Inclusion and the Global Digital Divide – Internet Governance Forum Discussion
## Executive Summary
This Internet Governance Forum panel brought together international policymakers, researchers, and industry experts to address the critical challenge of digital inclusion in an increasingly connected world. Moderated by Fredrik Matheson, the discussion featured Norwegian Minister of International Development Asmund Grover Aukrust, European Commission disability expert Inmaculada Porrero (joining via Zoom), UN Development Program officer Yu Ping Chan, Swedish telecommunications regulator Dan Sjoblom, Kenyan accessibility advocate Irene Mbari-Kirika, Norwegian accessibility authority head Malin Rygg, and Nordregio researcher Maja Brynteson.
The panel explored how despite 92% global internet coverage, one-third of the world’s population remains offline, highlighting that meaningful connectivity extends far beyond basic infrastructure access. The discussion evolved from formal presentations into an engaged dialogue that reframed digital inclusion from a service delivery issue to a fundamental human rights imperative with significant economic implications. Key themes included the shift from “digital divide” to “meaningful connectivity,” the importance of designing accessibility from the start, and the potential for Global South innovation to lead global solutions.
## The Digital Divide Paradox: From Coverage to Meaningful Connectivity
### Global Connectivity Challenges
Minister Asmund Grover Aukrust opened by presenting a stark reality: whilst 92% of the planet has internet coverage, approximately one-third of the global population remains offline. This disconnect reveals that meaningful connectivity involves complex barriers beyond infrastructure gaps, including affordability, digital literacy, lack of relevant local content, and accessibility issues.
The minister emphasized that digital inclusion fundamentally concerns human rights—access to information, freedom of expression, education, and democratic participation. He highlighted the importance of digital public infrastructure and open-source solutions, noting that such approaches encourage competition, foster innovation, and generate spillover effects across society. Norway’s “Altinn” system for government-to-people interaction exemplifies how countries can build inclusive digital infrastructure.
### The Nordic-Baltic Paradox
Maja Brynteson from Nordregio introduced the “Nordic-Baltic paradox”—the phenomenon where highly digitalised societies can paradoxically deepen digital divides as they advance. As societies become more digital, expectations rise and analogue alternatives disappear, potentially leaving behind those who cannot keep pace with technological change.
Brynteson’s research identifies digital exclusion as multidimensional and context-specific, often affecting overlapping vulnerable groups. She outlined two primary categories of barriers: access barriers (infrastructure and device affordability) and capability barriers (digital skills, literacy, and trust issues). Vulnerable groups include older adults, people with disabilities, immigrants, rural communities, and low-income populations, though specific challenges vary considerably by context.
Importantly, Brynteson emphasized that “not everyone can or wants to be digital,” arguing that maintaining analogue services and alternative options remains essential even in highly digitalised societies.
## Digital Inclusion as a Human Rights Imperative
### Fundamental Rights Framework
Multiple speakers converged on framing digital inclusion as a fundamental human rights issue. Malin Rygg from Norway’s accessibility authority emphasized that digital services are now inextricably linked to basic rights including education, employment, healthcare, and democratic participation. She presented a comprehensive framework addressing three core dimensions: connectivity (infrastructure and devices), accessibility (universal design and standards), and digital skills (literacy and confidence).
Rygg noted that 1.3 billion people worldwide live with disabilities, representing one in six of the global population, and require inclusive design from the outset rather than retrofitted solutions. She challenged paternalistic language often used in inclusion conversations, arguing that talking about “including” vulnerable groups can be condescending—excluded populations want to contribute rather than simply be included.
Yu Ping Chan from the UN Development Program reinforced this rights-based framing, connecting digital access to broader development goals. She warned that the future AI revolution could exacerbate existing inequalities, with projections suggesting that only 10% of AI’s global economic value will accrue to Global South countries other than China.
## Global South Innovation and Leadership
### African Innovation in Accessibility
Irene Mbari-Kirika, founder of Enable in Kenya, presented a compelling case for African leadership in accessibility innovation. She challenged the conventional narrative of Africa as merely a consumer of technology: “Africa must not only be a consumer, we must be a creator, a manufacturer and a global supplier of accessible technologies, designed and built on the continent by Africans for the world.”
Mbari-Kirika highlighted Kenya’s groundbreaking ICT Accessibility Act, which institutionalises digital inclusion and aligns with comprehensive accessibility standards. She emphasized the economic dimensions of accessibility, noting that the global assistive technology market is projected to reach $32 billion by 2030. With 15% of the global population living with disabilities, this represents significant untapped market potential.
She provided specific examples of African innovation, including SignVerse, which demonstrates how the continent’s mobile-first approach offers unique opportunities for inclusive design. However, she identified specific needs for African innovators including support for data sets, design, packaging, and bringing products to market.
### Reframing the Inclusion Narrative
Mbari-Kirika offered a powerful reframing: “Digital inclusion is not about making room at the table. It is about building a table where everyone has a seat and a voice.” This perspective shifts the conversation from viewing excluded groups as beneficiaries of assistance to recognising them as contributors with untapped potential, emphasizing empowerment and participation rather than charity or accommodation.
## Regulatory Frameworks and Implementation Strategies
### European Approach to Accessibility Legislation
Inmaculada Porrero from the European Commission outlined the European strategy for building comprehensive accessibility frameworks through three key components: building stakeholder awareness, creating specific policies with clear obligations and deadlines, and utilizing existing international standards like WCAG (Web Content Accessibility Guidelines) and EN 301549 rather than starting from scratch.
The European model emphasizes legislation with clear enforcement mechanisms combined with technical standards. Porrero noted this approach provides certainty for businesses whilst ensuring meaningful progress on accessibility, applying to both public and private sectors for comprehensive coverage.
However, Porrero identified a critical challenge: the lack of accessibility knowledge among ICT professionals. She emphasized the need to integrate accessibility requirements into university curricula and professional development programs, noting that many developers, designers, and decision-makers remain unaware of accessibility requirements and legislation.
### International Coordination and Standards
Dan Sjoblom from the Swedish Post and Telecom Authority emphasized the importance of international coordination, expressing hope for extending European accessibility standards globally through UN systems. He highlighted Sweden’s work through the IPRES program (formerly SPIDER) with 25 sub-Saharan countries, demonstrating practical approaches to international cooperation.
Sjoblom noted that the cross-cutting nature of digital policies requires collaboration across government ministries and sectors, making coordination particularly challenging but essential. He emphasized the crucial role of trusted community institutions—libraries, civil society organizations, and municipal citizen services—in reaching populations that might otherwise be left behind.
## Multi-Stakeholder Collaboration and Economic Dimensions
### Whole-of-Society Approaches
Yu Ping Chan emphasized the need for comprehensive collaboration involving government, private sector, and civil society organizations. This “whole-of-society approach” recognizes that no single actor can address digital inclusion challenges alone, with each stakeholder bringing unique capabilities and reaching different populations.
The discussion revealed significant challenges in engaging the private sector development community, which often lacks awareness of accessibility requirements. Speakers identified the need for comprehensive education and training programs targeting developers, designers, and product managers globally.
### Economic Benefits and Business Case
Multiple speakers emphasized significant economic opportunities presented by digital accessibility. Mbari-Kirika noted that digital accessibility “is not a sentimental issue, it is a sound investment and a strategic opportunity for growth and innovation.” The business case extends beyond disability-specific markets, as universal design principles benefit broader user bases and accessible products often perform better in challenging conditions.
Minister Grover Aukrust connected digital inclusion to broader development goals, warning that failure to close digital gaps will increase inequality and prevent achievement of employment, education, and development objectives.
## Implementation Challenges and Practical Solutions
### Knowledge and Capacity Gaps
The discussion identified persistent challenges requiring ongoing attention. A fundamental issue is the lack of accessibility knowledge among ICT professionals worldwide. Despite existing legislation and standards, many developers and decision-makers remain unaware of accessibility requirements.
This knowledge gap is compounded by the rapid pace of technological change. Minister Grover Aukrust specifically asked how policy can change as fast as digitalisation is moving forward, highlighting this as a critical challenge for effective governance.
### Innovative Approaches and Adaptive Design
The discussion highlighted innovative approaches to accessibility challenges. Fredrik Matheson shared examples of adaptive technology design, including “The Continent” newspaper’s WhatsApp and email delivery system, which demonstrates how services can be designed to work across different technological capabilities and preferences.
He also provided a practical example of systemic complexity through his daughter’s experience applying for a police certificate, illustrating how even simple processes can create barriers when not designed inclusively.
## Key Insights and Future Directions
### Fundamental Reframing Achieved
The discussion achieved a fundamental reframing of digital inclusion from a charity or service delivery issue to a human rights imperative with significant economic implications. The shift toward viewing excluded populations as contributors rather than beneficiaries represents a particularly important evolution, emphasizing empowerment and participation rather than accommodation.
### Design from the Start
A critical consensus emerged around the importance of designing accessibility from the beginning rather than retrofitting solutions. This “universal design” approach benefits everyone while avoiding the higher costs and limited effectiveness of after-the-fact accessibility improvements.
### Mobile-First and Leapfrogging Opportunities
The discussion highlighted how regions like Africa can leapfrog traditional development approaches by building accessibility into their digital infrastructure from the beginning. The continent’s mobile-first approach offers unique opportunities for inclusive design that could benefit global accessibility efforts.
## Unresolved Questions and Future Research Needs
Several critical questions remain unresolved, particularly around implementation and scaling of successful approaches. How can effective local innovations be scaled to broader implementation? How can the global community of ICT professionals be effectively reached and trained on accessibility requirements?
The challenge of keeping policy frameworks current with rapidly evolving technology requires ongoing attention. Traditional policy development processes may be too slow to keep pace with technological change, suggesting the need for more adaptive governance approaches.
The lack of shared understanding of digital inclusion across regions creates challenges for measuring progress and evaluating interventions. Developing common metrics and evaluation frameworks could help coordinate efforts and identify effective approaches.
## Conclusion
This Internet Governance Forum discussion demonstrated both the complexity of digital inclusion challenges and the potential for coordinated global action. The high level of consensus on fundamental principles—particularly the human rights framing and the economic benefits of inclusion—provides a strong foundation for future action.
The emphasis on comprehensive, multi-stakeholder approaches recognizes the complexity of the challenges while pointing toward practical solutions. Perhaps most importantly, the discussion challenged paternalistic approaches to inclusion, emphasizing instead the need to build new systems that harness everyone’s potential.
The shift from talking about “digital divides” to “meaningful connectivity” represents more than semantic change—it reflects a deeper understanding that access alone is insufficient. True digital inclusion requires addressing multiple, intersecting barriers while recognizing the agency and potential contributions of all users.
The unresolved questions identified provide a clear agenda for future research and action, requiring sustained commitment from multiple stakeholders, innovative approaches to governance and coordination, and continued emphasis on the human rights and economic imperatives that drive the digital inclusion agenda.
Session transcript
Fredrik Matheson: Hello, everyone. Thank you for joining us here in the room via Zoom and via YouTube. For those in the room, I want to remind you that you’ll need to put on your headsets to be able to hear. It needs to be on channel one, and the receiver needs to be on the table. It needs to be in line of sight of the transmitter. This is what enables us to have a quiet conference here. Very excited. So in this session, we want to open up a truly international conversation about digital inclusion. So we’ve brought together voices from different parts of the world. We have policymakers, we have researchers, and we have industry experts. And together, we are going to explore how we can tackle the digital divide in very real and practical ways. One of the things we’re going to be talking about today is what does it take to make sure that no one is left behind in the digital world. As our world becomes more closely enmeshed with the digital, how do we make sure that no one is left behind, no matter their background or where they live? We’re going to be hearing examples of what works well from the Nordic region and from the global south. And we hope that this discussion will spark new ideas and shared learnings along the way. We have some questions about what does digital exclusion really mean today? Why does it persist? And most importantly, what can we do to close the gap and make digital inclusion real? If you’re on Zoom, make sure to share your questions via the chat and we’ll do our best to bring them to the stage. There are also microphones here on the side for people who are here with us in the room. So I’m going to briefly introduce everyone. My name is Fredrik Matheson. I’m moderating here today. Then we have Asmund Grover Aukrust, who is the Norwegian Minister of International Development, who is responsible for international development in countries outside the OSCE, Middle East and Afghanistan. He’s also responsible for Norwegian humanitarian efforts.
Joining us via Zoom is Inmaculada Porrero, who is a senior expert on disability at the European Commission. She’s been leading work on the rights of people with disabilities with a focus on accessibility and assistive technologies since 1991. So if you’ve heard of the European Accessibility Act, we have much to thank her for there. With us, we also have Yu-Ping Chan, who is the senior program officer in the Office of the Secretary General’s Envoy on Technology at the United Nations. She coordinates the team’s work on follow-up of the Secretary General’s roadmap for digital cooperation and digital-related aspects of the Common Agenda Report, in particular, the Global Digital Compact, the GDC. With us, we also have Dan Sjöblom, who is the Director General of the Swedish Post and Telecom Authority. He was appointed by the Swedish government to that role in 2017. Previously, he was the chairperson of BEREC, the Body of European Regulators for Electronic Communications. And I’m super excited to have Irene Mbari-Kirika with us today, the founder and executive director of Enable, who is a recognized dynamic global strategic leader and executive level innovator who is bringing assistive technologies and key legislation to her native Kenya on making digital accessible for all. Later on, you should check out her website. There’s a film on how the entire project came to be. You will not leave untouched by that film. It’s fantastic. Also super excited to be here on stage with Malin Rygg, who is the head of Norway’s accessibility watchdog, the authority for universal design of ICT. She has been working to transform it into a data-driven powerhouse. So here in Norway, public entities have to register a web accessibility statement, which has had a tremendous impact on making sure everyone actually follows the rules that have been in place for so long. Very, very happy about that. And also joining us is Maja Brynteson, who is a research fellow at Nordregio.
She has a background in sustainable development and management studies. And we’re going to be hearing a keynote from her shortly on digital divides and more. So super excited by that. But first up, I will explain how the format’s going to work. We’re going to have four keynotes. We’re going to have the minister on stage in a moment. After the keynotes are done, everyone’s going to be back on the panel, and I’ll be back up here on stage for a panel discussion of about 30 minutes. We are going to be super strict with the schedule, so anyone who speaks for too long will be stopped. And remember to post your questions on Zoom or in the mics here, and we’ll do our best to bring them to the panel. So minister, I would then like to invite you up on stage. Let’s have a big round of applause.
Asmund Grover Aukrust: Great, dear friends, from the Norwegian side, we are very proud and honored to welcome you here and to host this very important conference. And also for me personally, as representative for this county in the Norwegian Parliament, I can also say welcome to Lillestrøm, and we are very honored to have you here. But most importantly, I have very much looked forward to this debate and the conversation that we will have today, and I look forward also to listening to your comments and questions later on. Because despite great progress, and even though 92% of our planet now has internet coverage, one third of our population is still offline. The biggest challenge is therefore not coverage; we have to talk about meaningful connectivity. We need measures that address both coverage and usage barriers, such as infrastructure gaps, policy and regulatory uncertainty, inequalities, limited affordability of devices and services, and digital illiteracy. Meaningful connectivity is a particular challenge in low-income countries. Even where coverage exists, barriers remain: intersectional inequalities across gender, race, age, disability status and rural communities. In addition to this, limited affordability of devices and services, lack of education, and a lack of local content and local languages continue to hinder widespread internet usage. Concerns about online health, safety, security and trust may also prevent further adoption of digital services. Not to mention the risk and harm that can occur online, especially for women and children. Meaningful connectivity is about basic human rights, including the right to information and freedom of expression. I would like to highlight the importance of digital public infrastructure. It encourages competition and fosters innovation and fiscal resilience, and it can generate spillover effects across society, institutions, markets and businesses.
Safe DPI can shape systems, build public trust, reduce digital gaps and promote inclusive economic and social development for all. DPI is therefore a priority for Norwegian development cooperation. Finally, let me also highlight the importance of open source digital solutions, including open source DPI, like Norway’s Altinn for government-to-people and government-to-business interaction. Digital ID is another essential part of DPI. It opens up a wide range of government services for citizens and businesses and protects users. Norway supports digital ID solutions developed in India that are now being rolled out in 26 countries. Now I look forward to listening to the rest of the debate, and to learning from all of you. Thank you so much.
Malin Rygg: So the digital everyday life is here for most of us. We use technology for big and small tasks alike. At work, in our free time, for school, entertainment, and staying connected with others. The digital solutions that have emerged in recent years have given us opportunities that we could only have dreamed of 20 years ago. But is everyone able to keep up? I’m Malin Rygg, and I’m delighted to be here and to gather with all of you to have this discussion on this important topic. The authority that I head has the obligation and mandate under Norwegian law to help remove digital barriers and prevent new ones from being created. And yet we see many people that are still being digitally excluded. So who are they? Why are they being excluded? And what should we actually do about this? As mentioned, there are 8.2 billion people in the world. Technology is evolving fast, and digital services have become the norm. But this acceleration, as it continues, also accelerates the risk of deepening inequality. Digital inclusion is essential for achieving the UN Sustainable Development Goals. Because without education, decent work, sustainable infrastructure, or equal access to health care, not all can participate. So it all depends on meaningful digital participation in this age. As was mentioned, 2.6 billion people are still offline, according to the UNDP. And digital exclusion reinforces social exclusion, in education and employment, but also in democratic participation and freedom of speech. The digital divide is no longer just about access, it is fundamentally about human rights. So did you know that 1.3 billion people worldwide live with a disability? That is one in six of us. It’s a diverse group that includes people with physical, cognitive and sensory impairments or other health conditions. We all bring unique skills, perspectives and contributions to society, but only if society is designed to include all of us.
And that is why accessibility is foundational to human dignity, opportunity and equality. Ensuring accessibility means building systems that uphold rights and empower all people to participate fully in the digital world. Rights, access and inclusion are interconnected. And to achieve true digital inclusion, we must therefore act along three core dimensions. Connectivity: building and maintaining infrastructure that ensures everyone has affordable and reliable access to the internet. Accessibility: designing services that are usable by everyone, regardless of ability, language, literacy and trust. And digital skills: enabling people to confidently and safely participate, from using basic tools to navigating complex digital systems. So there’s a gap, and that is a gap we have to mind, because the distance between those who are digitally included and those who are not is growing. And we need a framework to help us bridge it. To close this gap, we must work on two levels, the individual and the societal. And I like to point to the gap model, which provides a structured way of thinking about this. It shows how we can lower expectations on the societal level and at the same time give individuals who can’t reach them the help and support they need. On the connectivity part, from the societal level, we can work on infrastructure, broadband, mobile networks and assistive tech compatibility, for instance. For the individual, we can ensure that everyone has a device, that it’s affordable, and that they also have the subscriptions and anything else that they need to connect. On accessibility, from the societal level, we can work on universal design, inclusive standards and regulatory frameworks, as the EU has been doing with its directives in recent years. On the individual side, we can help with assistive technologies, adaptable user interfaces, glasses, screen readers and so forth.
When we talk about digital skills, from the societal level, we can work on competence among developers, designers, leaders, teachers and so forth. And from the individual side, we can work on digital literacy, user confidence and lifelong learning. This shows that exclusion is not only about physical access, it can also be about lacking tools, trust, usability or the know-how to engage meaningfully. Inclusion must be designed, taught and built into systems, not left to chance. Earlier this month, I attended the Inclusive Africa Conference 2025 in Nairobi. It offered a fresh and important perspective for me and reminded us that the Global South doesn’t need to simply try to catch up, but can leapfrog ahead and learn from our mistakes. In Kenya, the norm is mobile first, and with a young population, 70% under the age of 30, there are possibilities, but also difficulties that are different from ours. But what stood out is their commitment to standards from the start, enabling accessibility and inclusive innovation from day one. Where we in the north have been kind of stumbling through in the last 20 years, with a “design now, fix later” approach, the south can choose to build better from the beginning. Our contexts may differ, but the goal is shared: digital participation for all. And again, the core elements remain connectivity, accessibility, and digital skills. Before I close: we live in different parts of the world, but we are working toward the same future, a future where everyone, regardless of ability or background, can belong, contribute, and thrive in the digital world. I’m truly grateful to take part in this panel alongside such knowledgeable and committed voices, and to share our experience as part of a shared global effort. So let’s build this future together. Thank you.
Fredrik Matheson: Thank you, Malin. Now we are going to welcome Maja Brynteson from Nordregio up on stage. As soon as we get the presentation up.
Maja Brynteson: All right, I have a presentation. I don’t know if it’s… It should be coming. It should be here soon. But in the meanwhile I can just start by presenting myself. As Fredrik has said, my name is Maja Brynteson and I work at Nordregio, which is a research institute based in Stockholm, focusing on regional planning and development. And I will be speaking about the state of digital inclusion in the Nordic and Baltic countries. Over the past few years, Nordregio has led and contributed to several research projects focused on digital inclusion, and in this presentation I will share key insights and findings from that work. So let’s start with why digital inclusion is still an important topic in the Nordic and Baltic region. We often refer to the Nordic and Baltic paradox here to argue for the importance of digital inclusion, and I will try to explain what we mean by this paradox. The Nordic and Baltic countries are among the most highly digitalized countries in Europe. For example, according to the latest Digital Economy and Society Index (DESI), Finland, Denmark and Sweden are among the top performers in Europe when it comes to basic digital skills among citizens. And the picture is also positive with respect to government services, with high rates of e-government usage, as well as digital public services for citizens, which are high in Finland and all of the Baltic countries. But as the Nordic and Baltic region becomes increasingly digitalized, digital tools and skills are now essential for participating in everyday life, and the expectations placed on individuals continue to rise. There is a growing reliance on having digital tools and skills for being an active member of society. We are also seeing an increased use of electronic IDs for secure access and authentication.
In some countries, you need an EID to do everything from checking your health records and managing your bank account, to booking a doctor’s appointment, to even booking your apartment building’s laundry room. We are also seeing that digital communication and online platforms have become the norm, shaping how we work, learn and stay informed. But there are still significant disparities in both access and ability. Broadband coverage remains uneven, especially in some of the rural and remote areas in this region. And there are also varying levels of digital skills, both among and within different population groups. And these gaps continue to challenge inclusive digital participation. So it is important to recognize that not everyone is equally included in these digital societies. Certain population groups have been identified as being at risk of digital exclusion, and this slide provides a closer look at those groups. At Nordregio, we have looked more closely at older adults, people with disabilities, immigrants, people with low or no education, rural communities, young people and people with low income. Now it is important to say that not everyone in these groups struggles with digitalization. Many are doing just fine. But overall, these are the groups that research consistently identifies as being more vulnerable to digital exclusion. And at Nordregio we often describe digital exclusion as being multidimensional and context-specific. And what do we mean by this? By multidimensional we mean that people often become at risk when several factors overlap. For example, an older adult living in a rural area with limited income is likely to face more barriers than someone of the same age who is affluent and living in a well-connected urban area.
And by context-specific we mean that different groups face different challenges in different parts of digital life. Some may struggle with access to and usage of EIDs due to a lack of devices or broadband, while others may find digital platforms difficult to navigate due to language or accessibility issues. So while the Nordic and Baltic countries are at the digital frontier in many areas, we see that a share of the population remains at risk of digital exclusion. So here is the Nordic and Baltic paradox: the more digital our societies become, the greater the risk of deepening the digital divide. And we’re seeing this divide emerge along familiar lines: age, geography, disability, language and socioeconomic status. These gaps matter and they create significant consequences. The consequences of this digital divide are profound and far-reaching. They include, for example, not getting access to important information, missed job and education opportunities, barriers to civic engagement and democratic participation, limited access to health care and other essential services, challenges in performing economic activities, and increased risk of social isolation. So digital exclusion doesn’t just reflect existing inequalities, it can actually deepen them. In our research, we have looked at what the most common barriers to digital inclusion in the Nordic and Baltic countries are, and you can see them here. When we talk about these barriers, we usually divide them into two main categories. First, we have access barriers. These are about having the physical means to participate in digital life, like access to a stable internet connection or owning a digital device, such as a smartphone or computer. Second, we have capability barriers, and these relate to the skills and confidence needed to use digital tools.
And this includes digital skills, but also literacy and language, a lack of domain knowledge, and even a lack of trust, a feeling of security or willingness. And both types of barriers can prevent people from fully participating in our digital societies. And often, they overlap. So where do we stand today? What is the status of digital inclusion in the Nordic and Baltic region? The countries of this region are perceived as digitally advanced, and in many ways, that’s true. But the picture is more complex. We are leading in some areas, but we are lagging behind in others. And in our research, we can see that one key challenge is that there’s no shared understanding of what digital inclusion actually means across the region. This makes it harder to coordinate efforts and measure progress consistently, especially when we may not be talking about the same things. We also see that some groups remain at risk of exclusion, whether due to limited access, lack of digital skills, or insufficient support. Another issue that we see is the lack of user involvement in the design of digital solutions. Too often, services are built without input from the people who are most affected, those who are already at risk of being left out. So while we have made great strides, there is still important work to be done to ensure that digital inclusion is a reality for everyone in this region. So how do we move forward? To bridge the digital divide in the Nordic and Baltic region, we need a lot of things. For example, we need more targeted policies, we need inclusive design, and we need expanded support systems that meet people where they are. On this note, key enablers of digital inclusion already exist in our communities. Libraries, civil society organizations, and municipal citizen services all play a vital role in reaching those who might otherwise be left behind. These actors often provide not just access, but also guidance, training, and human support.
But these actors also need a mandate and support to work with these questions. Lastly, it is important to remember that not everyone can or wants to be digital. Maintaining analog services and alternative options is essential to ensure that everyone, regardless of their digital ability or willingness, can access the services they need and be a part of our societies. On that note, I end my presentation. Thank you so much for your time and attention. If you have any questions or would like to learn more about our work at Nordregio, please don’t hesitate to reach out.
Fredrik Matheson: For the final keynote, Irene, please join us on stage. And everyone, you absolutely have to check out the website, because when I was doing research for this panel and looking at the state of accessibility in Kenya, I saw just fantastic efforts, so I’m really looking forward to your keynote.
Irene Mbari-Kirika: Good afternoon, everyone. All protocol observed. It is with great honor that I stand before you at the Internet Governance Forum, a platform that champions open dialogue, shared responsibility and collective progress in our digital age. My name is Irene Mbari-Kirika, and I’m the founder and executive director of inABLE, a nonprofit organization based in Kenya with a mission to empower African youth with disabilities through technology. For the last 15 years, we’ve been championing digital accessibility to ensure that persons with disabilities are not left out. At inABLE, we believe in a future where no one is left behind in this digital revolution. Our assistive technology labs, located in at least eight schools for the blind in Kenya, provide blind and visually impaired students with essential digital skills to help them navigate the world independently. Using the power of collaboration in advancing accessibility, inABLE hosts the Inclusive Africa Conference each year, bringing together local, regional and international stakeholders to co-create solutions for digital inclusion for persons with disabilities. One of the conference’s most significant outcomes has been the development of Kenya’s ICT accessibility standard, which is very important to note. It covers products and services, and it’s the only one in Africa so far. But we do have plans. We are currently working with other African countries to scale these standards across the continent to accelerate Africa’s progress towards a more inclusive digital future. Last week, we received a powerful affirmation of our work when we were named to the Forbes Accessibility 100 list. This global recognition celebrates the world’s most impactful organizations and innovations driving progress in accessibility and disability inclusion. It validates our unwavering belief that persons with disabilities deserve dignity, opportunity, and full participation in the digital age.
Ladies and gentlemen, according to the GSMA Mobile Accessibility in Sub-Saharan Africa 2024 report, Africa is a mobile-first continent, with smartphone penetration of about 52%, projected to rise to about 81% by 2030. The World Bank also estimates that Africa’s population will reach approximately 1.7 billion by 2030, with 70% of this population made up of young people between the ages of 15 and 24. This will make Africa the youngest continent in the world, and it is a well-established fact that global corporations must establish a strong presence in Africa or risk falling behind in the race for future growth and innovation. In Africa, we are already seeing the Gen Z awakening, where digital tools are driving civic participation, creativity, and most importantly, holding institutions and individuals accountable. These young people are our future, a generation of digital natives poised to shape the world. Yet among them, there are millions of brilliant, creative and determined youth with disabilities who risk being left behind, not due to a lack of talent or ambition, but because we have failed to design new technologies with their needs in mind. We must prepare African youth not for yesterday’s roles, but for the opportunities and demands of tomorrow. In today’s world, AI literacy is essential at every level, a critical tool for transforming the digital divide into a powerful economic opportunity for the next generation, especially for our young people. The global assistive technology market is projected to reach $32 billion by 2030. Africa must not only be a consumer; we must be a creator, a manufacturer and a global supplier of accessible technologies, designed and built on the continent by Africans for the world. Digital accessibility therefore is not a sentimental issue; it is a sound investment and a strategic opportunity for growth and innovation.
African governments, including Kenya, have introduced tax incentives to support the development of digital solutions for the export market. Combined with rising literacy rates, increasing internet access and a deeper hunger for success, this positions Africa as the next continent from which the world can innovate and where the next wave of digital breakthroughs will be born. Ladies and gentlemen, the journey towards mainstreaming digital accessibility for persons with disabilities in Africa is already underway. A case in point is my own home country, Kenya, which recently passed a landmark law for persons with disabilities, the Accessibility Act of 2025. This law institutionalizes digital inclusion, aligning with the ICT accessibility standard and setting a strong example for the continent, meaning that compliance is now not optional, which is great for the inclusion of persons with disabilities. Our vision for digital accessibility in Africa will require the expertise, knowledge and resources of everyone gathered here at this forum. The recently concluded 6th Inclusive Africa Conference launched a series of year-round working groups aimed at sustaining momentum and driving measurable progress ahead of next year’s seventh edition of the conference. These groups present a valuable opportunity for individuals and organizations to connect with an established platform and contribute your expertise, resources and innovation to advance digital accessibility across Africa. I warmly invite all of you to join these working groups and actively shape a more inclusive and accessible future for everyone. May this year’s Internet Governance Forum be remembered as the moment when the global digital community came together to decisively champion digital inclusion for all, and to advocate for a universal standard for digital accessibility, one that applies equally to developing countries as it does to the rest of the world.
I keep saying that my mobile phone doesn’t change if I go to Africa or I come to Norway or I live in Washington, D.C. It’s the same mobile phone. So digital accessibility standards, we should be able to follow digital accessibility standards across from continent to continent and country to country. As I conclude, digital inclusion is not about making room at the table. It is about building a table where everyone has a seat and a voice. Thank you.
Fredrik Matheson: Thank you, Irene, that was wonderful. All Protocol Observed is also the publisher of a newspaper called The Continent that some of you might be getting via email. I love how clever their ways of building digital technology have been, because they have a WhatsApp channel and an email delivery, so you get a PDF and you don’t have to consume valuable network capacity. It’s such a brilliant way of shaping the product for local conditions, and it’s super useful for reference later on, too. So thank you, everyone on Zoom; remember to ask us questions. We’re going to have a little session now, and then we’re going to open up the floor afterwards. So first I’m going to ask our Minister of International Development, Åsmund Grøver Aukrust: what do you see as the most persistent and emerging barriers to digitalization? And beyond that, what role do regulation and standardization play in guaranteeing universal accessibility and inclusive design?
Asmund Grover Aukrust: Well, thank you for the very good speeches that we just heard; I learned a lot from listening to my colleagues up here on the stage. Well, I think digitalization creates so many possibilities. It creates so many possibilities for inclusion. As you finished up with, it’s the same mobile phone you can use in Kenya or in Sudan or in Washington D.C. or here in Lillestrøm. So, I mean, it really creates so many possibilities for inclusion. However, there is also a danger that this will create more inequality, because there will be a bigger division between those inside the digital world and those outside. So, therefore, I think it’s very important that we are so vocal about this, and this should be a very important part of the discussion. We learned through the other speeches here who the groups are that might be the most vulnerable. It could be elderly people, people with disabilities, people living in rural areas, people with a lack of education that might be more vulnerable. And, of course, there will be different solutions and different ways to tackle these challenges. But I think, of course, the government has a really important role here, a really important role in having universal design and reaching out to its whole population. When we’re talking about disabilities, I know that from the Norwegian side, we have been working together with Kenya on what we call the Global Disability Innovation Hub in Nairobi, which has created fantastic results. But as I started off with, I think the most important thing is that we try to seek out the barriers. And this is also, of course, a discussion that we need to have all the time, because the digital solutions are changing all the time. And therefore, we also need to change the policy as fast as the digitalization is moving forward.
Fredrik Matheson: Thank you. We now have a question from Inma, who is joining us via Zoom. Let’s just make sure all the technology is with us. There you are. Very good. So we have a question for you, Inma, and that is: what strategies for regulation and enforcement have proven effective in promoting digital inclusion across Europe? And also, what lessons should we take forward as digital policy evolves?
Inmaculada Porrero: Okay. So, I mean, building up an agenda for accessibility really takes time, and it takes time to prepare the field, to first gain knowledge about what we are talking about. It is not evident for people outside this field to know what we mean by digital inclusion and digital accessibility. Based on what we did in Europe, there was a need first to do that building up: to identify the stakeholders, to raise awareness, to better understand the field. But then, once that is done, I think we need to concretize what we mean by digital accessibility and digital inclusion, and the way of concretizing is to have specific policies that address the matters. Policies that are reflected in general digital policies, so that the general documents reflect well the commitment to, and the right of, persons with disabilities to be included in the digital world and in digital development. And to do that, you need specific and concrete actions on accessibility. In the end, what really worked in Europe, and I think what really made the change, is the legislation: having clear legislation with obligations for the private and the public sector to ensure that certain products, services and infrastructures that are used by people comply with accessibility requirements, so that persons with disabilities can use them and access them on an equal basis with others. Then, together with the legislation, we also need to have clear technical standards. And the strategy that we have used is really not to start from scratch. I think the presentations we just saw in this round table have illustrated this: there is already a lot being done. So should a country wish to advance on this matter and to improve the situation, I would say, look around.
We did the same at the time we were preparing our flagship legislation, the European Accessibility Act, and our standard, EN 301 549, resulting from Mandate 376 at the time. I’m talking about 2005. We looked around and we saw what countries that were more advanced on accessibility at the time were doing. In particular, we partnered with the US Access Board to have standards that were coherent. I’m really happy that now, in different parts of the world, like Canada or Australia, the European standard is being used. And also, as Irene was reflecting, this standard has been at the basis of the developments in Africa, and in particular in Kenya. So I would say: have clear objectives and put in place policies that raise awareness, that address and clarify what needs to be done, but then put it in legislation with clear deadlines, clear obligations, and clear enforcement mechanisms, and use technical standards or technical specifications or regulations in order to say what exactly needs to be done. If all this goes hand in hand, we have a bigger chance of achieving the objective of digital inclusion. Thank you.
Fredrik Matheson: Very good. Thank you very much. Next up, we have a question for you, Yu Ping, but I think you might have to hop over to the next chair so we can get you in front of the mic. I’m sorry, I should have told you that earlier. I need to make sure we’re accessible and that everyone can hear us. So my question is this: in your work, you’ve emphasized the importance of holistic multi-stakeholder collaboration involving governments, the private sector, and civil society, the three Ps really, to tackle the complex and interconnected nature of digital divides. In practice, what does effective collaboration look like, and what are the biggest barriers that we need to overcome to make this work?
Yu Ping Chan: Thank you so much, Fredrik. You’d mentioned, actually, part of the work that I’ve done at the United Nations, where I used to be in the Office of the Tech Envoy, and now at the United Nations Development Programme. That question you asked, in terms of what we have seen that works and what barriers we see in countries, is particularly important, particularly profound. I want to really emphasize the point that Irene had raised, where we’re really looking at challenges that the whole world faces. So for instance, Maja had spoken about the challenges in the Nordic and the Baltic regions. Inmaculada had just talked about the European challenges. But think how much more profound the challenges are in the global South and the global majority, in developing countries themselves, where they don’t have the legislation, the infrastructure, the skills, the capacity, and a lot of the public services that are present in richer, industrialized, and developed countries. And those are the challenges that we as UNDP, serving in 170 countries and territories around the world, have to contend with when we try our best to support the national governments and the developing countries that we work with. So for instance, when we talk about digital inclusion and we talk about inequalities, it’s not just those inequalities within societies and within groups as well. It’s also the divides that exist between countries. And that’s particularly important if we are gathering together as a global community, talking about digital issues and digital cooperation. So for instance, as we talk about the opportunity that AI brings, we also have to recognize that the future AI revolution could also exacerbate already existing inequalities, when, for instance, it’s projected that only 10% of the global economic value generated by AI in 2030 will accrue to global South countries other than China.
When you consider the fact that right now over 95% of top AI talent is concentrated in only six research universities in the US and China, you think about the fact that perhaps the global opportunity that is posed by AI will fundamentally leave behind many of these developing countries and widen the inaccessibility and the exclusion that they currently face when it comes to issues such as accessibility, affordability, connectivity, and so forth. So that is a fundamental concern from the perspective of the UNDP, and what we would consider a fundamental barrier when considering the question of digital inclusion. And then when you come to the practical level, for instance, you also have techno-fragmentation. So the minister emphasized the importance of the role of government, but it’s actually a role of government that needs to be holistic and comprehensive. It can’t be individual tech solutions that are designed by one ministry for a particular case, for a particular situation at a time. What we need to think about are digital foundations that cut across the entirety of government, that are interoperable. This is where, for instance, the minister’s emphasis on digital public infrastructure is something that the United Nations Development Programme also focuses on: that we’re creating digital frameworks that, like the roads and the railway systems that allow the burgeoning of a society and economy, must undergird the entirety of the delivery of public services in a country. So that kind of inclusion needs to be intentional from the start. Then also, for instance, Fredrik, you had mentioned the importance of all of society, the different Ps that you mentioned. So indeed we need a whole-of-society approach, where the private sector is also working with government, with a people-centered focus, to ensure this type of delivery.
So these are all important aspects where we as UNDP really think that we need a more comprehensive, holistic approach to supporting national governments, both in developed countries, but particularly in developing countries as well. The last point I want to underscore is really the importance of local ecosystems. We need to build capacity in developing countries to, exactly as Irene said, be co-creators in this digital and AI future. And that requires skills, investments, global capacity building and upskilling, as well as additional resources that are put into these types of efforts around the world.
Fredrik Matheson: Thank you. Thank you very much. We’re going to head on next to Dan Sjöblom, Director General of the Swedish Post and Telecom Authority. And my question for you is: what policy levers are most effective to reach underserved populations? And what can regulators do to drive this kind of equitable access without stifling innovation? And this is something I see in my neighborhood, where we have kids come over and they need Wi-Fi because they have old phones, but they don’t have a subscription; their parents can’t afford a subscription for each of their kids. And then they’ve come up with this hack where they call via Snapchat or FaceTime. FaceTime doesn’t work because there’s no ID connected to it. So there are these sorts of workarounds. But my question is more about the policy level: what can we do?
Dan Sjoblom: Yes. Thank you for that question. I’d like to say, I think, three things about how we have addressed some of those issues at my authority and the things we do. First, I’d like to say that being a telecom regulator has changed dramatically over the last years. I think anyone in the room who works in that business will recognize that we are drifting from telecoms into ICT and into digital, and it’s a whole new environment and a much, much more complex one, where also from a government standpoint it’s getting more and more difficult, because these policies are cross-cutting in a way that we haven’t seen before. So a lot of collaboration is needed at local, national and international levels, which makes me happy to be here today. Now, on connectivity, which is the first step: everyone needs to be connected. We have had for a long time a policy where we want to establish stable market conditions for private entrepreneurs to build out connectivity, both fixed and mobile, and that has taken us very far. We now have, I think, above 98% connectivity; over 98% of everyone in the country can connect to high-speed internet. But that is just the first step, and we want to get to 100%, so there’s also a subsidy program which we are running with the aim of enabling everyone to connect. But as we’ve heard other speakers on this panel mention, we see that of the 98% able to connect, a much lower percentage actually connect. And why is that, and how do you address that? We heard about the currently weaker groups. We also heard, I think, good comments about this being an ongoing development, so those who are not in the weaker groups today may well find themselves in weak groups tomorrow, and AI is of course a very big challenge for anyone to become connected and work with. So we have developed a program called Digital Today at the regulator in Sweden, which we are very happy and proud about.
It works with other government services, it works with municipalities, it works with academia, but most importantly it works with civil society, which I think is really the key message here. Because when you find those who can connect but have chosen, for various reasons, not to connect, it often links to feeling unsafe on the internet. It’s a dangerous place; online safety is not where it should be. And on cybersecurity, I think we’re all working very hard, but it’s not getting easier to keep up with those who are trying to defraud us of our funds or create problems of many kinds on the internet. And we have come, I think, to the stage where many of us don’t pick up the phone if we don’t see that the caller is somebody we know, and that’s very different from 20 years ago, when of course you never knew who was calling when you picked up the phone. But working with trusted institutions, like the libraries that were mentioned earlier, and also working with the civil society associations that are there for these weaker groups, is a very powerful means. So in this Digital Today program we have close to 400 of those organizations that come together, and we create platform material that can be used by everyone, and we go out and we meet people with their trusted representatives, and that’s very powerful. And the last thing I wanted to mention, coming back to international collaboration, is that we are very proud as well to have been, for over seven years now, in a program called IPRES today; it used to be called SPIDER. There we collaborate with 25 sub-Saharan countries, and we run peer-to-peer learning and sharing projects on development issues. I think they are presented out here in the stands, so anyone interested in the IPRES program, run by us, the regulator, and Stockholm University, please go out and have a chat with them; there are many good stories and experiences from that. Thank you.
Fredrik Matheson: Thank you very much. Irene, okay, so now I’m super excited. So we’ve seen how local and community-driven innovation can make a real difference. One thing is the practical application of it, but then also there’s the next-level effects that can come from it. So from your experience, what are some promising examples of such solutions, and how can we build them in a sustainable way, so it’s not just a one-off, it’s something that can really run over time?
Irene Mbari-Kirika: Thank you for that question. So first off, I’ll say that at Inclusive Africa, as part of inABLE, we realised that Africans are coming up with solutions, assistive technologies, accessibility solutions, but they have no way of pushing these products out to market or promoting them for more people to use. So at the conference, we started something we call the AT Village, and AT stands for assistive technology. And what happens every year is we do a call for proposals where people submit the innovations they’ve developed. Most of the time we get almost 100 innovations from various African countries, and this year, I think, we had about 15 being showcased. And the whole idea is to make sure we showcase these innovations at Inclusive Africa and online so that the innovators can get the support they need. So, a good example: you’ll find the person who came up with the idea for the innovation may just be a developer who has a sibling or a cousin with a disability, and they were trying to figure out how to make this happen. And I’m going to give the example of one product called SignVerse. SignVerse came out of the lack of communication between the innovator and a friend of his who was deaf. Because of the lack of sign language interpreters, he asked: how can we come up with a solution? And they came up with an AI-based solution to at least help bridge the gap. Because we all know sign language interpreters are very critical, but that model is not scalable. And if you think about Africa, in most countries, and starting with Kenya alone, there are different versions of sign language depending on what your tribe is, what your dialect is, whether you speak English, whether you speak French, Portuguese, and all that. So by the time you have all those sign language interpreters lined up for you, it becomes very complex and very expensive.
But this young man developed a product that now works for the Kenyan market. And now, using AI, he’s trying to see what he can do to make sure that his solution for sign language interpretation can be used in Rwanda and other African countries. Because of the differences in sign language, he’s trying to gather a lot of data to be able to at least come up with a good solution. And currently it’s working. Of course he has challenges. Data sets are a big challenge. And that’s where we come in. How do we help African innovators meet their needs? Some of them are just developers with great ideas, and it ends there. They’ll develop a great product, but they need to take this product to market. How do we help them design, package, and bring this product to market? How do we help with financing? So those are some of the issues where I believe we can help and support the African community and African innovators when it comes to really helping bridge this digital divide. Thank you.
Fredrik Matheson: Thank you. I’m so glad to hear he’s also traveling around to figure out what are the local dialects and flavors of it. Every time I pick up my phone, which was made by somebody in California, the software, I have to contend with all sorts of cultural issues. Like, no, you don’t write that way. Yes, I do. That’s how it works in our language. And that kind of thing is just incredibly important to tackle from the start. Okay, Malin, a question for you. So digital inclusion as a human rights issue, how should we understand that? And why do you think it’s critical to frame it in that way in today’s digital landscape? Like inclusion is important, then you frame it as human rights. That’s really interesting. What’s the effect?
Malin Rygg: Well, I think one of the big things, or what we’ve seen in the last years, is that before, we talked about digital services and we talked about people like they’re consumers of digital services, and they can maybe opt out, or if they can’t use them, we’re going to train them and give them digital skills so they can use them. But at the moment, you see digital services so intertwined with education, with work, with just being able to express yourself in a debate or in newspapers. So by dividing this into connectivity and digital skills, I think we are also losing one part in the middle, which is what Yu Ping was talking about: how are these services made? A lot of them have just grown from tech companies, from public entities that are, you know, trying to solve their own problem. But when the effect is that you actually educate through them, you have to use them in the workplace, you have to use them to do anything, analog services might be off the table. We see that very clearly in Norway: if you are actually digitally excluded, for instance, if you have a disability, although you might be digitally very skilled but the service just doesn’t work for you, or if you don’t have an ID that you can use, you actually are in some areas so excluded that you are not able to participate at all. And we did a report at the authority, a survey of digital education in primary schools in Norway, where we see that children with dyslexia or visual impairments are sometimes actually so digitally excluded that they don’t have the equal right to education that other children have. And that is in a very digital society like ours. So that is why we have to change the mindset: not just talking about digital services being offered to the public, but talking about this as one of the key components that you have to have in place for people to participate in all kinds of life.
And just to add, it was very interesting, Yu Ping, your perspective that we are also actually excluding parts of the global population, not only individuals in each country. So thank you for that. It’s very interesting.
Fredrik Matheson: There is a really neat thing in Norway when your kid is getting ready to go to high school. You as a parent will want to check out the different study offerings and, you know, what they can do. And the thing I absolutely love about it is that the agency that handles all the information has translated it to pretty much every language spoken in Norway, not just the official languages. So if you only speak Filipino, like some of the parents at our school, and you want your kid to study design, you can actually go and check out the curriculum at quite a detailed level. And that kind of inclusivity means it’s not just the individual who is able to access some system, but also their parents and the people around them, their ability to frame and anchor those things. There are so many incredibly important parts of this, and for those of you living in Norway who have kids, the sort of rite of passage, I will jokingly say, is getting your EID, because that enables you to actually do stuff. So we have a mobile payment app here, and the kids are like, when can I start using it? Because before that everything is impossible, because nobody will take cash, so you just have to have it. And that’s a very good framing, I think, to use human rights, because then you have individuals who are not old enough to drive or vote, but who still have these rights that need to be met by the systems that we have. And I think human rights is a much stronger way of framing it, as opposed to, oh, it’s convenient that you could, you know, buy a ticket for the bus when you’re a kid. All of that. Which leads us to Maja. I loved your presentation, and you showed us some of the groups that were at risk of exclusion. So are there any particular groups you would like to highlight?
Because we see that, you know, the Nordics and Baltics, we can call ourselves front-runner countries, but in many ways it means we’re lifting up to a level where a lot of people are potentially being left behind. What are your thoughts on that? Yeah, exactly, I agree
Maja Brynteson: with that. And so across the Nordic and Baltic countries, there are three groups everyone is talking about: older adults, people with disabilities, and immigrants. And then in some countries, such as Sweden and also Norway, we do talk more about rural communities, because we have a connectivity issue that maybe Denmark doesn’t have, since it’s a much smaller country compared to Sweden and Norway. And so we see that there are these groups that we and other research consistently identify as being more vulnerable, and as I said, not everyone in these groups is actually at risk. Many are very, very digitally capable. But I think what we need to talk about as well is that these groups often face similar challenges. So whether you’re an immigrant lacking domain knowledge, that’s also true for some youths; we see they also lack this domain knowledge. So even though there are different groups, there are similar challenges across these groups. And some of the countries, especially in the Nordics, have actually stopped talking about target groups. They talk more about a common barriers, common solutions approach: we need to identify the most common barriers and then implement solutions that cut across all of these population groups. And I think it’s going to be interesting to follow over the next five or ten years how that will work out.
Fredrik Matheson: Fantastic. The necessity of understanding systems in a society that is highly digital is something which is very, very complicated. So I remember I studied in Finland, and all I had to do was call an office on the phone. My boss said, call this number, you’ll get a national ID. And like a day later I had a national ID. I didn’t have to do anything. The same when I studied in Singapore, I just met up. Of course there was somebody to help out: do this, stamp here, look here, sign that, boom, everything works, right? It’s fantastic. We have some of the same things in Norway, but when you don’t fit in that system, good luck, because then nothing works. You can’t get a phone, so any idea about connectivity, nobody’s going to be able to call you, and any other sort of rights things, or logging into a public website, good luck with that. None of that works anymore. And even if you do have these things, just yesterday, so my oldest daughter is an athletics coach for kids this summer. It’s her first job, she’s 16, and of course Norway is a very well regulated society, so there has to be a so-called police certificate of conduct that can be applied for by the sports group at the athletics team. And to do this, she has to go to a website, she has to sign up with her national ID number, she has to use her bank ID, she has to sign different things, and of course the digital public infrastructure is in many ways very helpful, because it’s all digital. But then there’s something inside that process that required my signature, like a photo of my signature, where me as a designer developing these kinds of services was thinking, what’s happening here? But then I look over at her and she’s just completely confused by these alien concepts that I’m referring to, because I can sort of see the invisible rules behind all of this. This needs to happen, this needs to be approved, this is going to be the flow.
Whereas the screen design is not appropriate for it, there’s not enough supporting information, and many times you can go through the flow and get it to work, just like you’ll be able to book a plane ticket, but you don’t come out with any understanding of the overall system; it doesn’t upskill. And this is a really important thing in the digital playbook here for inclusion: the fact that we need to upskill and help people understand. And this is one thing I always find to be lacking, and it goes back to legislation. The legislation has typically not been written to be usable, so there are some experiments with that in Norway now to make it more usable. But these are the things I worry a lot about, because society being so digital means that even people who are connected, who have phones, who have enough money for a subscription with enough data, and who are fully able to use the devices in all sorts of ways, will still conceptually struggle to understand: how do I apply these different rules? And that’s just the really skilled ones. Then we have everyone else. So a question for you as a group is, if we don’t close these digital gaps now, what kind of long-term consequences are we facing socially, economically, and even politically? Any takers?
Malin Rygg: Well, I just want to, because your story just reminds me of a very important point that I didn’t make before, which is that when we talk about these groups that are vulnerable, and we talk about including them, it’s a very paternalistic kind of viewpoint: we are all in this bubble and we can include more people, and they are just sitting out there as faceless groups. These are people that are young people, people able to work, also older people, who all have potential; they just want to live their, you know, everyday life. It’s not like they’re useless and just need to be included so that, you know, more are included. These are people that are needed in society, like your 16-year-old daughter. So I think it’s very important that we really shift the focus. Although inclusion is a good term, it mustn’t cloud the point that people just want to contribute and be a part of society, and we want them to contribute. These are very important contributions. As Irene was saying about innovation in Africa, we want them to innovate, you know, for the whole international community. So we have to make more of an effort to make that happen, not just go along and say, okay, maybe we could include some more. We actually have to kind of see that this
Irene Mbari-Kirika: is the potential going forward. Thank you, and just to add to what Malin is saying: if you think about it, for me it’s more of a business benefit. Think about it this way: 15% of the global population lives with some form of disability, 15 to 20 percent depending on where you are. And today I’ll name one industry: financial services is the most difficult, the one that truly leaves persons with disabilities behind, and I’ll give an example. Let me talk about Africa and even the US. What happens is, if your banking products are not accessible, online banking or mobile money applications and things like that, if someone is blind and not able to access that product using their assistive technology, they have to get a friend or a neighbor or somebody to transact for them. And really, when we talk about safety and security in the whole financial process, that goes out the door, but so does the independence and dignity that someone has. So what we are saying is, it’s important we invest in that space, because if you make your digital products accessible, in financial services for instance, you will have more people with disabilities using your products independently. So focus on the needs of the users, not what we perceive them to be; really design and build products with users with disabilities. Get them to be part of the process from the beginning to the end. I always say you cannot go ahead and manufacture a shoe and sell it in the market if no human being has ever tried that shoe. So if you have people with disabilities testing the product and giving feedback, by the time you’re done with that product, a lot of people with various abilities will be able to use it, because you’ve taken care of some of the most difficult challenges they may experience.
So I’m saying this to say that we need to think of it as a business benefit and a way to capture that 15% of the market share that no one has tapped into. Thank you.
Asmund Grover Aukrust: Thank you, and an excellent question. And I think the main answer to what you’re asking, if we’re not able to close the digital gap, is that we will have increased inequality. And the consequences will be very dramatic, both for each individual and for our society, because we will not reach our other goals in society concerning employment, education, and so on. If people are not able to fill out an application for getting into school, I think in Norway, if 15-year-olds are not filling out their application, the system will bring them back in. But for higher education, or for employment, or to register for paying taxes, or for creating your own business, you are on your own. And so I think it’s extremely important, especially when we’re talking about digitalization, that we have this principle of leaving no one behind. And now we are talking from a Norwegian perspective, but in other countries the exclusion is even more dramatic concerning access; I mean, just think about access to electricity. In so many countries you don’t have electricity, and it’s almost impossible to be digital without electricity. So therefore digitalization should be much higher also on the development agenda. And therefore it’s important for me to be here and to be listening to this discussion, because the digital solutions are creating so many possibilities, but they might also create so many problems if you’re not dealing with this in the right way and with the principle of leaving no one behind.
Yu Ping Chan: Just to say again that, from the United Nations Development Programme, we fully subscribe to what the minister has just said. Digitalization and digital transformation are truly part of development. If we see the potential of digital and AI, we need to recognize their potential to be an accelerator of development and of the achievement of the SDGs themselves. And this fundamentally means that we have to look at what it means for developing countries. I particularly like this human rights framing around digital inclusion, because if you think about other parts of human rights, the right to information, the right to education, the right to employment in some cases, the right to the highest level of physical and mental health, all of this is inextricably tied up with digital technologies these days, right? And so perhaps it was COVID that brought that realization home to everyone: how intrinsic the need is to have these types of digital services and platforms, and how fundamental it is to the achievement of these basic rights. So the more we understand the way in which our lives are now tied up with this particular device that we use all the time, the more we’ll anticipate the need for governments, all sectors, and all of the different stakeholders that are present here today to be part of that conversation around how we actualize a more meaningful digital society for everyone. And that is something that is not just a challenge for Norway and Europe, but also for the rest of the world and the international community that we function in.
Fredrik Matheson: Dan?
Dan Sjoblom: Yes, I fully subscribe to what everyone has said. I just wanted to make maybe one comment and offer one little hopeful thought. First, the comment: I think we need to realize that we are at the beginning of digitalization. This is not something which will be done in a few years’ time, and I think it has to remain high on the agenda. And I think we have come to the stage where we realize that this is cross-cutting. It affects every minister’s portfolio, which is very clear back home. I mean, I report to one of the ministers, but everyone has digital in his or her portfolio. And then the importance of standards has been mentioned, and I think that’s something we need to continue to keep very high on the agenda. Within Europe, we have the accessibility directive, as was mentioned earlier on, and we have the work on the digital wallet, which is ongoing. And I’m just hoping that those can be taken up within the UN system and globally, so that we can see a future where we have universal design which is really universal, or global at least. It’s not so hopeful maybe today, with the global situation being what it is, but we have to keep working on it.
Fredrik Matheson: So a fun fact about Norway is that in the 1960s, there was very limited connectivity. It was difficult to have enough budget to actually roll out telecommunications. And one of the impacts was, number one, when a village was connected, that was a huge celebration, because finally there was a phone. And if you were a doctor, you could get one extra upstairs. It was very hard to get hold of. But there was a thing happening among the telcos, which is that companies would be acquired for their phone lines. So if you had a company in Oslo, for example, and it had like 40 lines going in, then a company might buy you, take 30 of them, and then sell you again, because that would be a way to get more connectivity. It seems impossible to understand, but this is how they would optimize for business at the time. The same in the Philippines, where I grew up: the big slogan for all the politicians was water, power, and phone. That was the big thing. Now, I work in an energy company, and the energy future is changing with solar and wind. And one thing I’m very excited to see, that I think will be happening in many places, is that power can become available, and then computing can become available, and then telecommunications can become available, in ways that might seem unfamiliar to people in Scandinavia, in the Nordics, in the Baltics, in the Global North, and that there could be completely different ways of doing this where we can learn how these technologies can be made accessible. But I keep coming back to one really important point which is hard to solve. This is actually to get people who, like me, develop digital systems to, number one, know what accessibility is and be familiar with the WCAG or EN 301 549 standards.
This is surprisingly difficult for those people because I think, as some of the other panels today and throughout the week will point to, much of the private sector development of technology happens in a context where people are completely unaware of legislation even existing, like the idea that there should be a requirement. So this, I think, might have a point on how we can get even more developers, designers, product managers, technologists to take this very seriously. From what I understand, in the Irish version of accessibility rules, it’s a corporate law, so if you don’t follow it, you go to jail. Maybe. Any more comments? We just have to unmute you.
Inmaculada Porrero: Okay, sorry, I’m unmuted now. So let me tell you, first of all, that the line is not very good in terms of hearing what you were saying, so I hope that what I’m going to tell you really fits into this precise moment of the discussion. But before doing that, and I understood you were asking me how we can close the knowledge gap, I would like to say that I fully agree that digital inclusion, with accessibility as a precondition for digital inclusion for persons with disabilities, should be in all digital developments that are going to be used by people, because otherwise, de facto, we are excluding persons with disabilities. And it is not only about having a specific policy setting out what the accessibility requirements are, but really about mainstreaming these requirements in other policies, whether we are talking about procurement, or about funds, which have been mentioned here, development funds, but also internal funds. Also when we talk about imports into all our different countries, conditions should be set to ensure that we have a common level playing field, so that companies inside our countries that have to comply with accessibility requirements compete on equal conditions with those that may be coming from countries where they have no requirements but want to enter countries in which there are requirements. So it’s really important in that context to set coherent requirements across the globe. We said technology is global, and this is really essential. Now, we have seen an important evolution in the field of accessibility. While several years ago a lot of work was happening on what accessibility is, how you define it, how it relates to products, how it relates to services, now that is pretty clear.
We have accessibility requirements for different types of components of digital elements, whether it is the website, the user interfaces, or the content; the requirements are clear, and those requirements are going to be usable also in new digital products and services, because they will all have a user interface, maybe a different user interface, but from a functional point of view those requirements will be there. Now, what is the problem now? The problem, I would say, or the challenge, is to have the knowledge that is available about what accessibility is translated into laws and into policies that are enforceable and checkable, so that it is possible to see whether the products, the services, and the infrastructures comply with those requirements. That’s one thing. The other challenge that we have is indeed having the persons, the experts, the engineers, the manufacturers, the service providers competent in accessibility, so that they really have the capacity to implement it. I mean, we are undertaking a big effort in Europe to provide training and to raise awareness, but at the end of the day, in order to make this sustainable, we need to turn to those institutions which are providing training in digital technologies, in ICT. If those institutions, whether universities, technical high schools, or professional organisations, do not embrace accessibility as part of their curriculum, as part of their efforts to train and upgrade knowledge and bring competencies to the professionals in the field, it will be very difficult to implement accessibility. And I know it’s a challenge on the one hand, because you are facing the freedom of universities, for example, to decide what curriculum they have. But there is something that needs to be done, and that can be done, by authorities in order to make sure that the new generations of professionals are equipped with accessibility knowledge, skills, and competencies.
and that current professionals can upgrade their knowledge in order to be able to deliver on accessibility as it is required. So I hope I have addressed the point that you were concerned with.
Fredrik Matheson: Thank you so much. Thank you. We are at the end of our keynotes and panels and discussions. I’m hugely grateful to the panel. A few things that you should all go off and read: the digital inclusion playbook from the UNDP, essentially from cover to cover. You should also go and check out the Kenya standard for accessibility. Because just as Ima was talking about, the fact that we can have standards that are conformant with each other, that are in sync with each other, makes sense. Because once you know how to do things in one place, you’ll know how to make it work in other countries as well. And also, this connection of innovation and insight from across the world is incredibly exciting, because accessibility and inclusiveness is something we need to do globally. So we need to be set up for that. And everyone who works on software, owns, or funds, or helps make software and digital services happen needs to take this to heart, because we are, in many ways, reshaping society. So let’s do it inclusively. Let’s have a big round of applause for our panel. Thank you.
Asmund Grover Aukrust
Speech speed
125 words per minute
Speech length
1011 words
Speech time
484 seconds
One-third of global population remains offline despite 92% internet coverage, with meaningful connectivity being the key challenge
Explanation
Despite great progress in internet coverage reaching 92% of the planet, one-third of the population is still offline. The biggest challenge is not coverage but meaningful connectivity, which requires addressing barriers like infrastructure gaps, policy uncertainty, inequalities, limited affordability, and digital illiteracy.
Evidence
92% of our planet now has internet coverage, but one third of our population is still offline
Major discussion point
Digital Divide and Barriers to Inclusion
Topics
Development | Infrastructure
Digital inclusion is fundamentally about human rights including information access and freedom of expression
Explanation
Meaningful connectivity is described as being about basic human rights, particularly the right to information and freedom of expression. This frames digital access not as a convenience but as a fundamental right that must be protected and ensured for all.
Evidence
Meaningful connectivity is about basic human rights, including the right of information and freedom of expression
Major discussion point
Digital Inclusion as Human Rights
Topics
Human rights
Digital public infrastructure encourages competition, innovation, and can generate spillover effects across society
Explanation
Digital public infrastructure (DPI) is highlighted as encouraging competition and fostering innovation while building fiscal resilience. It can create positive spillover effects across society, institutions, markets, and businesses, making it a priority for development cooperation.
Evidence
Safe DPI can shape systems and public trust, reduce digital gaps, and promote inclusive economic and social development for all. Norway supports digital ID solutions developed in India that are now being rolled out in 26 countries
Major discussion point
Multi-stakeholder Collaboration
Topics
Infrastructure | Development
Failure to close digital gaps will increase inequality and prevent achievement of employment, education, and development goals
Explanation
If digital gaps are not closed, the main consequence will be increased inequality with dramatic effects for individuals and society. This will prevent achieving other societal goals in employment, education, and development, as people unable to access digital systems will be left behind.
Evidence
If people are not able to fill out an application for getting to school, or for higher education, or for employment, or also to register for paying taxes and so on, or creating your own businesses, I mean, you are on your own
Major discussion point
Economic and Business Benefits
Topics
Development | Economic
Maja Brynteson
Speech speed
142 words per minute
Speech length
1585 words
Speech time
669 seconds
Digital exclusion is multidimensional and context-specific, affecting vulnerable groups like elderly, disabled, immigrants, rural communities, and low-income populations
Explanation
Digital exclusion affects multiple overlapping groups including older adults, people with disabilities, immigrants, those with low education, rural communities, young people, and people with low income. The exclusion is multidimensional, meaning people become at risk when several factors overlap, and context-specific, meaning different groups face different challenges.
Evidence
An older adult living in a rural area with limited income is likely to face more barriers than someone of the same age who is affluent and living in a well-connected urban area
Major discussion point
Digital Divide and Barriers to Inclusion
Topics
Development | Human rights
Disagreed with
– Malin Rygg
Disagreed on
Approach to supporting vulnerable populations – targeted vs. universal design
Access barriers include infrastructure and device affordability, while capability barriers involve digital skills, literacy, and trust issues
Explanation
Digital inclusion barriers are divided into two main categories: access barriers (physical means like stable internet and devices) and capability barriers (skills, confidence, literacy, language, domain knowledge, trust, and security). Both types often overlap and prevent full participation in digital societies.
Evidence
Access barriers are about having the physical means to participate in digital life, like access to a stable internet connection or owning a digital device. Capability barriers relate to the skills and confidence needed to use digital tools
Major discussion point
Digital Divide and Barriers to Inclusion
Topics
Development | Infrastructure
The Nordic-Baltic paradox shows that highly digitalized societies can still deepen digital divides as expectations rise
Explanation
Despite Nordic and Baltic countries being among the most digitalized in Europe, significant disparities remain in access and ability. As societies become more digital, the risk of deepening the digital divide increases, creating a paradox where advancement can lead to greater exclusion.
Evidence
Finland, Denmark and Sweden are among the top performers in Europe when it comes to basic digital skills, but there are still significant disparities in both access and ability. Broadband coverage remains uneven, especially in rural and remote areas
Major discussion point
Digital Divide and Barriers to Inclusion
Topics
Development | Infrastructure
Lack of shared understanding of digital inclusion across regions makes coordination and progress measurement difficult
Explanation
One key challenge in the Nordic and Baltic region is the absence of a shared understanding of what digital inclusion actually means. This makes it harder to coordinate efforts and measure progress consistently, especially when different stakeholders may not be discussing the same concepts.
Evidence
There’s no shared understanding of what digital inclusion actually means across the region. This makes it harder to coordinate efforts and measure progress consistently
Major discussion point
Implementation Challenges
Topics
Legal and regulatory | Development
Maintaining analog services and alternative options is essential for those who cannot or choose not to be digital
Explanation
It’s important to remember that not everyone can or wants to be digital. Maintaining analog services and alternative options is essential to ensure that everyone, regardless of their digital ability or willingness, can access needed services and participate in society.
Evidence
Not everyone can or wants to be digital. Maintaining analog services and alternative options is essential to ensure that everyone, regardless of their digital ability or willingness, can access the services they need
Major discussion point
Implementation Challenges
Topics
Development | Human rights
Disagreed with
– Malin Rygg
Disagreed on
Role of analog alternatives in digital societies
Yu Ping Chan
Speech speed
190 words per minute
Speech length
982 words
Speech time
309 seconds
Global inequalities exist between countries, with only 10% of AI economic value projected to accrue to Global South countries excluding China
Explanation
Digital divides exist not just within societies but between countries. The future AI revolution could exacerbate existing inequalities, with projections showing only 10% of global AI economic value will benefit Global South countries (excluding China), while over 95% of top AI talent is concentrated in just six universities in the US and China.
Evidence
Only 10% of the global economic value generated by AI in 2030 will accrue to the global South countries, except for China. Over 95% of top AI talent is concentrated in only six research universities in the US and China
Major discussion point
Digital Divide and Barriers to Inclusion
Topics
Development | Economic
Effective collaboration requires whole-of-society approach with private sector, government, and people-centered focus
Explanation
Digital inclusion requires comprehensive collaboration across all sectors of society. This includes private sector working with government while maintaining a people-centered focus to ensure effective delivery of digital services and infrastructure.
Evidence
We need a whole-of-society approach, where it’s the private sector also working with government, with a people-centered focus, to ensure this type of delivery
Major discussion point
Multi-stakeholder Collaboration
Topics
Development | Legal and regulatory
Local ecosystems and capacity building in developing countries are crucial for co-creating digital and AI futures
Explanation
Building capacity in developing countries is essential for them to become co-creators rather than just consumers in the digital and AI future. This requires skills development, investments, global capacity building, upskilling, and additional resources dedicated to these efforts worldwide.
Evidence
We need to build capacity in developing countries to be co-creators in this digital and AI future. That requires skills, investments, global capacity building and upskilling, as well as additional resources
Major discussion point
Multi-stakeholder Collaboration
Topics
Development | Capacity development
Digital transformation is fundamental to development and achieving Sustainable Development Goals
Explanation
Digitalization and digital transformation are integral parts of development itself. Digital and AI technologies have the potential to accelerate development and achievement of the SDGs, making them essential tools for global development efforts rather than separate initiatives.
Evidence
If we see the potential of digital and AI, we need to recognize its potential to be an accelerator of development and achievement of the SDGs itself
Major discussion point
Economic and Business Benefits
Topics
Development | Sustainable development
Malin Rygg
Speech speed
136 words per minute
Speech length
1586 words
Speech time
695 seconds
Digital exclusion reinforces social exclusion in education, employment, and democratic participation
Explanation
Digital exclusion doesn’t exist in isolation but reinforces broader social exclusion across multiple areas of life. This includes barriers to education, employment opportunities, and democratic participation, making digital access essential for full social participation.
Evidence
Digital exclusion reinforces social exclusion. It reinforces it in education, employment, but also for democratic participation and freedom of speech
Major discussion point
Digital Inclusion as Human Rights
Topics
Human rights | Development
Digital services are now intertwined with basic rights like education, work, and civic participation, making exclusion a rights violation
Explanation
Digital services have become so integrated with essential life functions that exclusion from them constitutes a violation of basic rights. When analog services are removed and digital becomes mandatory for education, work, and civic participation, digital exclusion becomes a human rights issue rather than just a convenience matter.
Evidence
Children with dyslexia or visual impairment are sometimes so digitally excluded that they don’t have the equal right to education that other children have in Norway’s digital education system
Major discussion point
Digital Inclusion as Human Rights
Topics
Human rights | Online education
Disagreed with
– Maja Brynteson
Disagreed on
Role of analog alternatives in digital societies
1.3 billion people worldwide live with disabilities and need accessible digital systems for full participation
Explanation
With 1.3 billion people (one in six globally) living with disabilities, including physical, cognitive, and sensory impairments, accessible design is foundational to human dignity and equality. Society must be designed to include everyone, making accessibility essential for full digital participation.
Evidence
1.3 billion people worldwide live with a disability. That is one in six of us. It’s a diverse group that includes people with physical, cognitive and sensory impairments or other health conditions
Major discussion point
Digital Inclusion as Human Rights
Topics
Human rights | Rights of persons with disabilities
Universal design and inclusive standards must be built into regulatory frameworks from the start
Explanation
Rather than retrofitting accessibility, inclusive design must be built into systems from the beginning. This requires regulatory frameworks that mandate universal design and inclusive standards, ensuring accessibility is not left to chance but is systematically designed and implemented.
Evidence
Kenya’s commitment to standards from the start, enabling accessibility and inclusive innovation from day one, while the north has been stumbling through with a ‘design now fix later’ approach
Major discussion point
Regulatory Frameworks and Standards
Topics
Legal and regulatory | Rights of persons with disabilities
Africa can leapfrog ahead by learning from others’ mistakes and building accessibility standards from day one
Explanation
The Global South, particularly Africa, doesn’t need to simply catch up but can leapfrog ahead by learning from the mistakes of more developed regions. With mobile-first approaches and young populations, they can choose to build better systems with accessibility from the beginning rather than retrofitting later.
Evidence
In Kenya, the norm is mobile first, and with a young population, 70% under the age of 30. What stood out is their commitment to standards from the start, enabling accessibility and inclusive innovation from day one
Major discussion point
Innovation and Local Solutions
Topics
Development | Digital standards
Irene Mbari-Kirika
Speech speed
144 words per minute
Speech length
1884 words
Speech time
780 seconds
Kenya’s ICT Accessibility Act of 2025 institutionalizes digital inclusion and sets an example for Africa
Explanation
Kenya has passed landmark legislation that institutionalizes digital inclusion, aligning with ICT accessibility standards and making compliance mandatory rather than optional. This law sets a strong example for the continent and represents significant progress in legal frameworks for accessibility.
Evidence
Kenya recently passed the landmark law for persons with disability Accessibility Act of 2025. This law institutionalizes digital inclusion, aligning with the ICT accessibility standard and setting a strong example for the continent
Major discussion point
Regulatory Frameworks and Standards
Topics
Legal and regulatory | Rights of persons with disabilities
African innovators are developing assistive technologies but need support for market access, financing, and scaling solutions
Explanation
African innovators are creating accessibility solutions and assistive technologies, but they lack pathways to bring products to market. They need support with design, packaging, financing, and scaling their innovations to reach broader audiences and create sustainable businesses.
Evidence
At Inclusive Africa, we get about 100 innovations from various African countries annually. One example is SignVerse, an AI-based sign language interpretation solution developed to bridge communication gaps, now expanding across African countries
Major discussion point
Innovation and Local Solutions
Topics
Development | Capacity development
The global assistive technology market, projected to reach $32 billion by 2030, presents opportunities for African creators and manufacturers
Explanation
The assistive technology market represents a significant economic opportunity worth $32 billion by 2030. Africa should position itself not just as a consumer but as a creator, manufacturer, and global supplier of accessible technologies designed and built on the continent for worldwide use.
Evidence
The global assistive technology market is projected to reach 32 billion by 2030. Africa must not only be a consumer, we must be a creator, a manufacturer and a global supplier of accessible technologies
Major discussion point
Innovation and Local Solutions
Topics
Economic | Development
Digital accessibility represents untapped market potential, with 15% of global population living with disabilities
Explanation
Making digital products accessible is a business opportunity to capture the 15% market share of people with disabilities that remains largely untapped. When products are designed with accessibility from the start, they benefit users with various abilities and create independent access to services like banking.
Evidence
15% of the global population lives with some form of disability. If banking products are not accessible, blind users must rely on others to transact, compromising safety, security, independence and dignity
Major discussion point
Economic and Business Benefits
Topics
Economic | Rights of persons with disabilities
Dan Sjoblom
Speech speed
141 words per minute
Speech length
926 words
Speech time
391 seconds
Civic society organizations, libraries, and trusted community institutions are key enablers for reaching excluded populations
Explanation
Trusted institutions like libraries and civic society organizations play a vital role in digital inclusion by providing not just access but also guidance, training, and human support. These organizations are essential for reaching people who might otherwise be left behind in digital transformation.
Evidence
Libraries, civil society organizations, and municipal citizen services all play a vital role in reaching those who might otherwise be left behind. In Sweden’s Digital Today program, close to 400 organizations come together to create platform material and meet people with their trusted representatives
Major discussion point
Multi-stakeholder Collaboration
Topics
Development | Capacity development
Working with trusted representatives and peer-to-peer learning programs can effectively address digital exclusion
Explanation
Effective digital inclusion programs work through trusted community representatives and peer-to-peer learning approaches. Sweden’s experience with sub-Saharan African countries through the IPRES program demonstrates how collaborative learning and sharing can address development challenges in digital inclusion.
Evidence
Sweden runs the IPRES program with 25 sub-Saharan African countries, using peer-to-peer learning and sharing projects on development issues. The Digital Today program works with trusted institutions and civic society associations
Major discussion point
Multi-stakeholder Collaboration
Topics
Development | Capacity development
Cross-cutting nature of digital policies requires collaboration across government ministries and sectors
Explanation
Digital policy has become cross-cutting, affecting every minister’s portfolio and requiring collaboration at local, national, and international levels. The complexity of digital transformation means that traditional sector-based approaches are insufficient, necessitating coordinated government-wide responses.
Evidence
Being a telecom regulator has changed dramatically – we are drifting from telecoms into ICT and into digital. Every minister has digital in his or her portfolio, making collaboration essential
Major discussion point
Implementation Challenges
Topics
Legal and regulatory | Development
Inmaculada Porrero
Speech speed
140 words per minute
Speech length
1175 words
Speech time
501 seconds
Clear legislation with obligations for public and private sectors, combined with technical standards and enforcement mechanisms, is essential
Explanation
Effective digital accessibility requires concrete policies reflected in legislation with clear obligations for both public and private sectors. This must be combined with technical standards and enforcement mechanisms to ensure compliance and real progress toward digital inclusion.
Evidence
What really worked in Europe is legislation with clear obligations, clear deadlines, and clear enforcement mechanisms, combined with technical standards like EN 301549
Major discussion point
Regulatory Frameworks and Standards
Topics
Legal and regulatory | Rights of persons with disabilities
European approach involved building stakeholder awareness, creating specific policies, and using existing international standards rather than starting from scratch
Explanation
Europe’s successful accessibility strategy involved first building stakeholder knowledge and awareness, then creating specific policies that address digital accessibility concretely. Rather than reinventing solutions, they partnered with advanced countries like the US and built on existing standards.
Evidence
Europe partnered with the US Access Board to have coherent standards. The European standard EN 301549 is now being used in Canada, Australia, and has been the basis for developments in Africa, particularly Kenya
Major discussion point
Regulatory Frameworks and Standards
Topics
Legal and regulatory | Digital standards
Fredrik Matheson
Speech speed
162 words per minute
Speech length
3242 words
Speech time
1198 seconds
Need for digital literacy among developers, designers, and decision-makers who often lack awareness of accessibility requirements
Explanation
A critical challenge is getting people who develop digital systems to understand accessibility requirements and standards like WCAG or EN 301549. Many private sector technology developers work without awareness that accessibility legislation even exists, making education and awareness crucial.
Evidence
Much of private sector technology development happens where people are completely unaware of legislation even existing, like the idea that there should be accessibility requirements
Major discussion point
Implementation Challenges
Topics
Capacity development | Rights of persons with disabilities
Agreements
Agreement points
Digital inclusion is a fundamental human rights issue
Speakers
– Asmund Grover Aukrust
– Malin Rygg
– Yu Ping Chan
Arguments
Meaningful connectivity is about basic human rights, including the right of information and freedom of expression
Digital exclusion reinforces social exclusion. It reinforces it in education, employment, but also for democratic participation and freedom of speech
If you think about other parts of human rights, the right to information, the right to education, the right to employment in some cases, the right to the highest level of physical and mental health, all of this is inextricably tied up to digital technologies these days
Summary
All three speakers frame digital inclusion not as a convenience or service issue, but as a fundamental human rights matter that affects access to information, freedom of expression, education, employment, and democratic participation.
Topics
Human rights | Development
Multi-stakeholder collaboration is essential for effective digital inclusion
Speakers
– Yu Ping Chan
– Dan Sjoblom
– Asmund Grover Aukrust
Arguments
We need a whole-of-society approach, where it’s the private sector also working with government, with a people-centered focus, to ensure this type of delivery
Libraries, civil society organizations, and municipal citizen services all play a vital role in reaching those who might otherwise be left behind
Digital public infrastructure encourages competition and fosters innovation and fiscal resilience, and it can generate spillover effects across society, institutions, markets and businesses
Summary
Speakers agree that addressing digital inclusion requires coordinated efforts across government, private sector, and civil society organizations, with each playing crucial complementary roles.
Topics
Development | Legal and regulatory
Failure to address digital divides will increase inequality and have severe societal consequences
Speakers
– Asmund Grover Aukrust
– Maja Brynteson
– Yu Ping Chan
Arguments
If people are not able to fill out an application for getting to school, or for higher education, or for employment, or also to register for paying taxes and so on, or creating your own businesses, I mean, you are on your own
Digital exclusion doesn’t just reflect existing inequalities, it can actually deepen them
The future AI revolution could exacerbate already existing inequalities
Summary
All speakers warn that unaddressed digital divides will not only perpetuate but actively worsen existing inequalities, creating cascading effects across education, employment, and social participation.
Topics
Development | Economic | Human rights
Standards and regulatory frameworks are crucial for ensuring accessibility
Speakers
– Inmaculada Porrero
– Irene Mbari-Kirika
– Dan Sjoblom
Arguments
What really worked in Europe is legislation with clear obligations, clear deadlines, and clear enforcement mechanisms, combined with technical standards like EN 301549
Kenya recently passed the landmark law for persons with disability Accessibility Act of 2025. This law institutionalizes digital inclusion, aligning with the ICT accessibility standard
The importance of standard has been mentioned. And I think that’s something we need to continue to have very high up on the agenda
Summary
Speakers agree that clear legal frameworks combined with technical standards are essential for making real progress on digital accessibility and inclusion.
Topics
Legal and regulatory | Rights of persons with disabilities
Similar viewpoints
Both speakers emphasize the significant size of the disability community globally and the importance of designing accessible systems from the start rather than retrofitting, viewing this as both a rights issue and business opportunity.
Speakers
– Malin Rygg
– Irene Mbari-Kirika
Arguments
1.3 billion people worldwide live with a disability. That is one in six of us
15% of the global population lives with some form of disability
Topics
Rights of persons with disabilities | Economic
Both speakers recognize that digital exclusion operates at multiple levels – within societies affecting vulnerable groups, and between countries creating global inequalities that could be exacerbated by emerging technologies like AI.
Speakers
– Maja Brynteson
– Yu Ping Chan
Arguments
Digital exclusion is multidimensional and context-specific, affecting vulnerable groups like elderly, disabled, immigrants, rural communities, and low-income populations
Global inequalities exist between countries, with only 10% of AI economic value projected to accrue to Global South countries excluding China
Topics
Development | Economic
Both speakers see the Global South, particularly Africa, as having the opportunity to build better, more inclusive digital systems from the ground up rather than retrofitting accessibility later, with local innovation being key to this process.
Speakers
– Malin Rygg
– Irene Mbari-Kirika
Arguments
Africa can leapfrog ahead by learning from others’ mistakes and building accessibility standards from day one
African innovators are developing assistive technologies but need support for market access, financing, and scaling solutions
Topics
Development | Innovation and Local Solutions
Unexpected consensus
Business and economic benefits of accessibility
Speakers
– Irene Mbari-Kirika
– Asmund Grover Aukrust
– Yu Ping Chan
Arguments
The global assistive technology market is projected to reach 32 billion by 2030. Africa must not only be a consumer, we must be a creator, a manufacturer and a global supplier of accessible technologies
Digital public infrastructure encourages competition and fosters innovation and fiscal resilience, and it can generate spillover effects across society, institutions, markets and businesses
If we see the potential of digital and AI, we need to recognize its potential to be an accelerator of development and achievement of the SDGs itself
Explanation
Unexpectedly, speakers from different sectors (advocacy, government, UN) all emphasized the economic and business case for digital inclusion, moving beyond just moral or rights-based arguments to highlight market opportunities and economic development benefits.
Topics
Economic | Development
Need for analog alternatives and choice
Speakers
– Maja Brynteson
– Dan Sjoblom
Arguments
Not everyone can or wants to be digital. Maintaining analog services and alternative options is essential to ensure that everyone, regardless of their digital ability or willingness, can access the services they need
Working with trusted representatives and peer-to-peer learning programs can effectively address digital exclusion
Explanation
Surprisingly, even in a discussion focused on digital inclusion, there was consensus that maintaining non-digital alternatives is essential, recognizing that full digitalization may not be appropriate or desired for everyone.
Topics
Development | Human rights
Overall assessment
Summary
The speakers demonstrated remarkably high consensus across multiple dimensions of digital inclusion, agreeing on its status as a human rights issue, the need for multi-stakeholder collaboration, the importance of regulatory frameworks and standards, and the severe consequences of inaction. They also shared views on the economic opportunities presented by accessibility and the need to maintain alternatives for those who cannot or choose not to engage digitally.
Consensus level
Very high consensus with strong alignment on fundamental principles and approaches. This level of agreement across speakers from different sectors (government, UN, academia, advocacy, regulation) suggests a mature understanding of digital inclusion challenges and broad support for comprehensive, rights-based solutions. The consensus implies strong potential for coordinated global action on digital inclusion policies and initiatives.
Differences
Different viewpoints
Approach to supporting vulnerable populations – targeted vs. universal design
Speakers
– Maja Brynteson
– Malin Rygg
Arguments
Digital exclusion is multidimensional and context-specific, affecting vulnerable groups like elderly, disabled, immigrants, rural communities, and low-income populations
Digital services are now intertwined with basic rights like education, work, and civic participation, making exclusion a rights violation
Summary
Maja advocates for identifying specific vulnerable groups and their particular challenges, while Malin emphasizes moving away from targeting specific groups toward universal human rights-based approaches that benefit everyone
Topics
Human rights | Development
Role of analog alternatives in digital societies
Speakers
– Maja Brynteson
– Malin Rygg
Arguments
Maintaining analog services and alternative options is essential for those who cannot or choose not to be digital
Digital services are now intertwined with basic rights like education, work, and civic participation, making exclusion a rights violation
Summary
Maja argues for maintaining analog alternatives for those who cannot or choose not to be digital, while Malin’s framing suggests digital access is so fundamental to rights that analog alternatives may be insufficient
Topics
Human rights | Development
Unexpected differences
Framing of inclusion – paternalistic vs. empowerment approach
Speakers
– Malin Rygg
– Other speakers
Arguments
Digital services are now intertwined with basic rights like education, work, and civic participation, making exclusion a rights violation
Various arguments about including vulnerable groups
Explanation
Malin unexpectedly challenged the entire framing used by other speakers, arguing that talking about ‘including’ vulnerable groups is paternalistic and that the focus should be on people wanting to contribute rather than needing to be included
Topics
Human rights | Development
Overall assessment
Summary
The discussion showed remarkable consensus on core issues with only subtle disagreements on approaches and framing. Main areas of difference were around targeting specific groups vs. universal approaches, maintaining analog alternatives vs. digital-first strategies, and gradual vs. immediate implementation of universal standards.
Disagreement level
Low level of disagreement with high consensus on fundamental goals. The disagreements were primarily methodological rather than philosophical, suggesting strong potential for collaborative solutions. The unexpected challenge to paternalistic framing was constructive and helped refine the discussion toward more empowering approaches.
Takeaways
Key takeaways
Digital inclusion must be framed as a human rights issue rather than just a convenience or service delivery matter, as digital services are now essential for education, employment, healthcare, and civic participation
The ‘Nordic-Baltic paradox’ demonstrates that even highly digitalized societies can deepen digital divides as expectations rise and analog alternatives disappear
Meaningful connectivity goes beyond infrastructure coverage to address usage barriers including affordability, digital literacy, trust, safety, and accessibility
Digital exclusion is multidimensional and context-specific, often affecting overlapping vulnerable groups including elderly, disabled, immigrants, rural communities, and low-income populations
Effective digital inclusion requires a three-dimensional approach: connectivity (infrastructure and devices), accessibility (universal design and standards), and digital skills (literacy and confidence)
Clear legislation with enforcement mechanisms, combined with technical standards and multi-stakeholder collaboration, is essential for sustainable progress
Local innovation and community-driven solutions, particularly from the Global South, can leapfrog traditional development approaches and create scalable accessibility solutions
Digital accessibility represents significant untapped economic potential, with 15% of the global population living with disabilities representing an underserved market
The private sector development community often lacks awareness of accessibility requirements and legislation, creating a critical knowledge gap that must be addressed through education and training
Resolutions and action items
Participants were invited to join year-round working groups launched by the Inclusive Africa Conference to sustain momentum and drive measurable progress
UNDP’s digital inclusion playbook and Kenya’s ICT accessibility standard were recommended as resources for global implementation
Need to integrate accessibility training into university curricula and professional development programs for ICT professionals
Recommendation to mainstream accessibility requirements across all digital policies including procurement, funding, and import regulations
Call for universal digital accessibility standards that apply equally across developing and developed countries
Emphasis on building local capacity and ecosystems in developing countries for co-creating digital and AI solutions
Unresolved issues
How to effectively reach and train the global community of developers, designers, and product managers who remain unaware of accessibility requirements and legislation
Balancing the push toward digitalization with maintaining analog services for those who cannot or choose not to be digital
Addressing the fundamental inequality between countries in AI development and economic benefits, with 95% of top AI talent concentrated in six universities in the US and China
Resolving the tension between university academic freedom and the need to mandate accessibility education in ICT curricula
How to scale successful local innovations and community-driven solutions to broader regional or global implementation
Managing the cross-cutting nature of digital policies across multiple government ministries and sectors
Addressing the challenge that digital exclusion often reinforces existing social inequalities rather than solving them
Suggested compromises
Adopting a ‘common barriers, common solutions’ approach that addresses shared challenges across different vulnerable groups rather than targeting specific populations
Using existing international standards (like WCAG and EN 301549) as building blocks rather than creating entirely new frameworks from scratch
Implementing a gap model that works simultaneously at societal level (infrastructure, standards, regulations) and individual level (devices, assistive technologies, skills training)
Maintaining both digital-first approaches for efficiency while preserving analog alternatives for inclusion
Leveraging trusted community institutions like libraries and civic organizations as intermediaries to reach digitally excluded populations
Focusing on digital public infrastructure that can serve as a foundation for multiple services rather than creating isolated solutions
Thought provoking comments
So here is the Nordic and Baltic paradox: the more digital our societies become, the greater the risk of deepening the digital divide.
Speaker
Maja Brynteson
Reason
This comment reframes the conventional narrative about digital progress. Instead of viewing digitalization as inherently positive, it highlights how advancement can paradoxically increase exclusion. This challenges the assumption that technological progress automatically benefits everyone equally.
Impact
This concept became a central theme that other speakers referenced throughout the discussion. It shifted the conversation from celebrating digital achievements to critically examining their unintended consequences, leading to deeper analysis of who gets left behind in highly digitalized societies.
Digital accessibility therefore is not a sentimental issue, it is a sound investment and a strategic opportunity for growth and innovation… Africa must not only be a consumer, we must be a creator, a manufacturer and a global supplier of accessible technologies, designed and built on the continent by Africans for the world.
Speaker
Irene Mbari-Kirika
Reason
This comment powerfully reframes accessibility from charity to economic opportunity, challenging paternalistic approaches to development. It positions Africa not as a recipient of solutions but as an innovator and global supplier, fundamentally shifting the power dynamic in the conversation.
Impact
This perspective influenced subsequent speakers to move away from ‘helping’ language toward recognizing untapped potential and market opportunities. It elevated the discussion from inclusion as moral imperative to inclusion as economic necessity and competitive advantage.
Digital inclusion is not about making room at the table. It is about building a table where everyone has a seat and a voice.
Speaker
Irene Mbari-Kirika
Reason
This metaphor fundamentally challenges the traditional inclusion paradigm. Rather than asking existing systems to accommodate more people, it calls for redesigning systems from the ground up to be inherently inclusive.
Impact
This comment crystallized a key tension in the discussion and influenced other speakers to critique paternalistic approaches. Malin Rygg later echoed this sentiment, noting how inclusion language can be ‘paternalistic’ and emphasizing that excluded groups want to contribute, not just be included.
By multidimensional we mean that people often become at risk when several factors overlap. For example, an older adult living in a rural area with limited income is likely to face more barriers than someone of the same age that is affluent and living in a well-connected urban area.
Speaker
Maja Brynteson
Reason
This introduces the crucial concept of intersectionality to digital exclusion, moving beyond single-factor analysis to understand how multiple disadvantages compound. This adds sophisticated nuance to understanding exclusion patterns.
Impact
This framework helped other speakers move beyond simple categorizations of ‘at-risk groups’ to understand the complex, overlapping nature of digital barriers. It influenced the discussion toward more nuanced policy solutions that address multiple factors simultaneously.
When for instance, it’s projected that only 10% of the global economic value generated by AI in 2030 will accrue to the global South countries, except for China… you think about the fact that perhaps the global opportunity that is posed by AI will fundamentally leave behind many of these developing countries
Speaker
Yu Ping Chan
Reason
This comment introduces hard data about global inequality in AI benefits, shifting the discussion from individual-level exclusion to systemic global exclusion. It highlights how current trajectories will entrench rather than reduce global digital divides.
Impact
This broadened the scope of the discussion from national digital inclusion policies to global structural inequalities. It added urgency to the conversation by showing how emerging technologies like AI could dramatically worsen existing divides if not addressed proactively.
It’s very paternalistic kind of viewpoint… These are people that are young people, they are people able to work… These are very important contributions… We actually have to kind of see that this is the potential going forward.
Speaker
Malin Rygg
Reason
This comment directly challenges the framing used throughout the discussion, calling out the paternalistic language of ‘inclusion’ and reframing excluded groups as contributors rather than beneficiaries. It’s a meta-critique of how the conversation itself was being conducted.
Impact
This self-reflective moment caused the discussion to become more conscious of its own language and assumptions. It reinforced Irene’s earlier point about building new tables rather than making room at existing ones, and influenced speakers to emphasize contribution and potential rather than need and vulnerability.
Overall assessment
These key comments fundamentally transformed what could have been a conventional discussion about digital inclusion policies into a more sophisticated examination of power dynamics, economic opportunities, and systemic inequalities. The ‘Nordic paradox’ concept established that progress itself can create exclusion, while Irene’s economic framing and table-building metaphor challenged charity-based approaches. The intersectionality framework added analytical depth, and Yu Ping’s global inequality data expanded the scope beyond national boundaries. Malin’s critique of paternalistic language created a moment of self-reflection that elevated the entire discussion. Together, these comments shifted the conversation from ‘how do we help the excluded’ to ‘how do we redesign systems to harness everyone’s potential’ – a fundamental reframing that made the discussion more empowering and strategically focused.
Follow-up questions
How can we ensure that policy adapts as fast as digital solutions are changing?
Speaker
Asmund Grover Aukrust
Explanation
The minister emphasized that digital solutions are changing constantly, requiring policy frameworks to adapt at the same pace to remain effective
How can we help African innovators with data sets, design, packaging, and bringing products to market?
Speaker
Irene Mbari-Kirika
Explanation
She identified specific gaps in supporting African developers who create great accessibility solutions but lack resources for market entry and scaling
How can we scale Kenya’s ICT accessibility standard across other African countries?
Speaker
Irene Mbari-Kirika
Explanation
She mentioned current work to expand Kenya’s accessibility standards to other African countries, which requires further development and coordination
How can we address the projected inequality where only 10% of AI’s global economic value will accrue to Global South countries (except China)?
Speaker
Yu Ping Chan
Explanation
This represents a critical challenge for ensuring AI development doesn’t exacerbate existing digital divides between developed and developing nations
How can we build local ecosystems and capacity in developing countries to be co-creators in the digital and AI future?
Speaker
Yu Ping Chan
Explanation
This addresses the need for developing countries to move beyond being consumers to becoming creators and manufacturers of digital solutions
How can we integrate accessibility requirements into university curricula and professional training programs?
Speaker
Inmaculada Porrero
Explanation
She identified the challenge of ensuring new generations of ICT professionals are equipped with accessibility knowledge and skills from the start
How can we create coherent accessibility requirements across the globe to ensure fair competition?
Speaker
Inmaculada Porrero
Explanation
This addresses the need for international coordination to prevent unfair competition between companies with different accessibility compliance requirements
How can we develop a shared understanding of what digital inclusion means across the Nordic and Baltic region?
Speaker
Maja Brynteson
Explanation
She identified this as a key challenge that makes it harder to coordinate efforts and measure progress consistently across the region
How effective will the common barriers, common solutions approach be compared to targeting specific groups?
Speaker
Maja Brynteson
Explanation
Some Nordic countries have shifted from targeting specific vulnerable groups to addressing common barriers, and the effectiveness of this approach needs evaluation
How can we better involve users in the design of digital solutions, especially those at risk of exclusion?
Speaker
Maja Brynteson
Explanation
She identified lack of user involvement as a key issue, particularly for those most affected by potential exclusion
How can we make legislation more usable and understandable for citizens?
Speaker
Fredrik Matheson
Explanation
He mentioned experiments in Norway to make legislation more usable, recognizing that complex legal frameworks create barriers to understanding digital systems
How can we reach universal design standards that are truly global rather than regional?
Speaker
Dan Sjoblom
Explanation
He expressed hope for extending European accessibility standards globally through UN systems to achieve truly universal design
Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.
Open Forum #16 AI and Disinformation Countering the Threats to Democratic Dialogue
Open Forum #16 AI and Disinformation Countering the Threats to Democratic Dialogue
Session at a glance
Summary
This discussion focused on the dual role of artificial intelligence in both creating and combating disinformation, examining threats to democratic dialogue and potential solutions. The panel was organized by the Council of Europe as part of an Internet Governance Forum open forum, bringing together experts from policy, technology, and civil society sectors.
David Caswell opened by explaining how AI has fundamentally changed information creation from an artisanal to an automated process, enabling the generation of entire narratives across multiple platforms and extended timeframes rather than just individual fake artifacts. He highlighted emerging risks including automated personalized persuasion at scale and the embedding of deep biases into AI training models, while also noting opportunities for more systematic and accessible civic information. Chine Labbé from NewsGuard presented concrete evidence of AI’s impact on disinformation, revealing that deepfakes in the Russia-Ukraine conflict increased dramatically from one case in the first year to sixteen sophisticated examples in the third year. She described how malicious actors now create entire networks of AI-generated fake news sites, with over 1,200 such sites identified globally, and demonstrated how AI chatbots frequently repeat false claims as authoritative facts approximately 26% of the time.
Maria Nordström discussed Sweden’s policy approach, emphasizing the importance of the Council of Europe’s AI Framework Convention as the first legally binding global treaty on AI and human rights. She highlighted the challenge of balancing public education about AI risks without further eroding trust in information systems. Olha Petriv shared Ukraine’s practical experience, describing their bottom-up approach including industry self-regulation through codes of conduct and the critical importance of teaching children AI literacy and critical thinking skills, particularly given their vulnerability to disinformation campaigns. The discussion concluded with actionable recommendations including preserving primary source journalism, developing AI literacy programs, creating certification systems for AI chatbots, and potentially establishing public service AI systems trained on reliable data sources.
Keypoints
## Major Discussion Points:
– **AI’s dual role in disinformation**: The discussion explored how AI both amplifies disinformation threats (through deepfakes, automated content creation, and AI-generated fake news sites) while also offering potential solutions for detection and fact-checking
– **Scale and automation challenges**: Speakers emphasized how AI has fundamentally changed the disinformation landscape by enabling malicious actors to create sophisticated false content at unprecedented scale and low cost, with examples including over 1,200 AI-generated fake news sites and deepfakes becoming increasingly believable
– **Systemic failures in current approaches**: The panel critiqued existing disinformation countermeasures as largely ineffective due to scale mismatches, political polarization, and the focus on individual artifacts rather than systematic solutions
– **Education and literacy as key solutions**: Multiple speakers advocated for AI literacy programs, particularly targeting children and teachers, with innovative approaches like using AI chatbots to teach critical thinking about AI-generated content
– **Regulatory and governance frameworks**: Discussion of legal instruments like the Council of Europe’s AI Framework Convention and the need for industry self-regulation, certification systems, and consumer empowerment to incentivize truthful AI systems
## Overall Purpose:
The discussion aimed to examine the complex relationship between AI and disinformation, analyzing both the threats AI poses to democratic dialogue and the opportunities it presents for combating false information. The session sought to identify practical solutions and policy approaches for maintaining information integrity in an AI-driven world.
## Overall Tone:
The discussion maintained a serious, analytical tone throughout, reflecting the gravity of the subject matter. While speakers acknowledged significant challenges and risks, the tone remained constructive and solution-oriented rather than alarmist. There was a notable shift toward cautious optimism in the latter portions, with speakers emphasizing actionable solutions like education, regulation, and technological safeguards, concluding with a call to transform AI “from a weapon to a force for good.”
Speakers
– **Irena Gríkova** – Head of the Democratic Institutions and Freedoms Department at the Council of Europe, moderator of the panel
– **David Caswell** – Product developer, consultant and researcher of computational and automated forms of journalism; expert for the Council of Europe and member of expert committee for guidance note on implications of generative AI on freedom of expression
– **Chine Labbé** – Senior Vice President and Managing Editor for Europe and Canada at NewsGuard (company that tackles disinformation online)
– **Maria Nordström** – PhD, Head of Section, Digital Government Division at the Ministry of Finance in Sweden; works on national and international AI policy at the Government Offices of Sweden; participated in negotiations of the EU AI Act and the Council of Europe’s Framework Convention on Artificial Intelligence
– **Olha Petriv** – Artificial intelligence lawyer at the Centre for Democracy and the Rule of Law in Ukraine; expert on artificial intelligence who played active role in discussion and amending negotiations of the Framework Convention on Artificial Intelligence of the Council of Europe
– **Mikko Salo** – Representative of Faktabaari, a digital information literacy service in Finland
– **Audience** – Various unidentified audience members who asked questions
**Additional speakers:**
– **Frances** – From YouthDIG, the European Youth IGF
– **Jun Baek** – From Youth of Privacy, a youth-led privacy and cybersecurity education organization
Full session report
# AI and Disinformation: Navigating Threats and Opportunities for Democratic Dialogue
## Executive Summary
This comprehensive discussion, organised by the Council of Europe as part of an Internet Governance Forum open forum, brought together leading experts from policy, technology, and civil society sectors to examine the dual role of artificial intelligence in both creating and combating disinformation. The panel explored how AI has fundamentally transformed the information ecosystem, creating unprecedented challenges for democratic dialogue whilst simultaneously offering potential solutions for maintaining information integrity.
The discussion was structured around the Council of Europe’s three-pillar approach to addressing AI and disinformation: integrating fact-checking into AI systems, implementing human rights-by-design principles in platform development, and empowering users with knowledge and tools to navigate AI-mediated information environments.
## Opening Framework: Council of Europe’s Approach
Irena Gríkova, the moderator from the Council of Europe, established the context by highlighting the organisation’s role as “Europe’s democracy and human rights watchdog” representing 46 member states. She introduced the Council’s three-pillar guidance note addressing AI and disinformation: fact-checking integration, platform design principles, and user empowerment strategies.
Gríkova emphasised the global significance of the Council of Europe’s AI Framework Convention, the first legally binding international treaty addressing AI’s impact on human rights, democracy, and the rule of law. The convention has attracted attention from across Europe and beyond, with signatories including Japan, Switzerland, Ukraine, Montenegro, and Canada.
## The Transformation of Information Systems
David Caswell, a product developer and computational journalism expert, provided the foundational framework for understanding the current information crisis. He explained that society has undergone a fundamental transformation in its information ecosystem over the past 15 years, moving “from a one-to-many, or more accurately, a few-to-many shape, to a many-to-many shape.” This structural change represents the root cause of current disinformation challenges, with AI serving as the latest evolution in this transformation.
Caswell emphasised that AI has fundamentally altered information creation processes, transforming them from artisanal, human-driven activities to automated, scalable operations. This shift enables the generation of entire narratives across multiple platforms and extended timeframes, rather than just individual fake artefacts. He noted significant improvements in AI accuracy, citing leaderboard data showing hallucination rates dropping from “15% range to now I think the top models in the leaderboard are 0.7%.”
A concerning development highlighted during the presentation involves the potential for powerful actors to reshape foundational AI training data. The moderator read from Caswell’s slides about Elon Musk’s announcement that Grok would be used to “basically rebuild the archive on which they train the next version of Grok,” effectively enabling the rewriting of humanity’s historical record at the training data level.
## Empirical Evidence of AI’s Impact on Disinformation
Chine Labbé, Senior Vice President at NewsGuard, provided compelling empirical evidence of AI’s growing impact on disinformation campaigns. Her research revealed dramatic escalation in the sophistication and frequency of AI-generated false content, particularly regarding the Russia-Ukraine conflict. Deepfakes increased from one case in the first year to sixteen sophisticated examples in the third year, demonstrating both improved quality and increased deployment.
Labbé described how malicious actors now create entire networks of AI-generated fake news sites designed to appear as credible local news sources. NewsGuard has identified over 1,200 such sites globally, representing a new category of disinformation infrastructure operating at unprecedented scale. She provided a specific example of the Storm 1516 campaign, which created a fabricated video involving Brigitte Macron, and mentioned John Mark Dougan, “a former deputy Florida sheriff, who is now exiled in Moscow,” who has created “273 websites.”
Critical research findings revealed that AI chatbots repeat false claims approximately 26% of the time overall, with specific testing of Russian disinformation showing rates of 33% initially, dropping to 20% two months later. A BBC experiment found that “10% of the cases, there were significant problems with the responses. In 19% of the cases, the chatbot introduced factual errors, and in 13% of the cases, there were quotes that were never in the original articles.”
Labbé identified “vicious cycles of disinformation,” where AI-generated false content becomes validated by other AI systems, creating self-reinforcing loops of synthetic credibility. Malicious actors exploit this through “LLM grooming” – saturating web results with propaganda so that chatbots will cite and repeat it as factual information.
## Policy and Regulatory Responses
Maria Nordström, representing Sweden’s national AI policy efforts, discussed the importance of developing comprehensive regulatory frameworks whilst acknowledging the limitations of purely legislative approaches. She highlighted a critical policy challenge: finding the right balance between educating the public about AI risks without further eroding trust in information systems.
Nordström posed a fundamental question: “To what extent is it beneficial for society when all information is questioned? What does it do with democracy and our agency when we can no longer trust the information that we see, that we read, that we hear?”
The Swedish approach recognises that hard law regulation has limitations in requiring “truth” from AI systems, making consumer empowerment and choice crucial components of any comprehensive strategy. This perspective emphasises market-driven solutions alongside regulatory frameworks, empowering users to make informed decisions about AI systems.
## Practical Experience from Ukraine
Olha Petriv, an artificial intelligence lawyer from Ukraine, provided insights from a country experiencing active disinformation warfare. She described Ukraine’s bottom-up approach to AI governance, including industry self-regulation through codes of conduct developed whilst awaiting formal legislation.
Petriv emphasised the particular vulnerability of children to AI-generated disinformation, sharing examples of Ukrainian refugee children’s faces being weaponised in deepfake campaigns. She argued for early AI literacy education, stating: “It’s not just a parental issue, it’s a generation’s lost potential… if we will not teach children how to understand news and understand AI, somebody else will teach them how to think.”
Her approach to children’s AI education focuses on critical thinking and algorithm understanding rather than prohibiting AI use entirely, recognising that children will inevitably encounter AI systems and must be equipped with the skills to use them responsibly.
## Educational Solutions and Audience Engagement
The discussion included significant audience participation, with contributions from Mikko Salo representing Faktabaari, Finland’s digital information literacy service, Frances from YouthDIG, and Jun Baek from Youth of Privacy. These contributions highlighted practical implementation challenges for AI literacy programmes and the need for specific guidelines and support materials for teachers.
The discussion revealed innovative approaches to AI education, including using AI chatbots themselves to teach children about AI literacy. However, questions remained about the optimal age for introducing AI concepts, with debate about whether children as young as 10 could effectively understand these concepts or whether secondary school age was more appropriate.
## Opportunities and Positive Applications
Despite significant challenges, speakers identified substantial opportunities for AI to enhance information systems when properly implemented. Caswell argued that AI can enable systematic rather than selective journalism coverage, processing vast amounts of digitally accessible information impossible for human journalists to handle comprehensively.
AI offers potential to make civic information more accessible across different literacy levels, languages, and format preferences. This democratising aspect could help bridge information gaps that currently exclude certain populations from full participation in democratic dialogue.
Labbé suggested that AI tools could assist in monitoring disinformation and deploying fact-checks at scale, provided humans remain in the loop to ensure accuracy and context. She noted that platforms have been “disengaging from that commitment” to fact-checking, making AI-assisted solutions potentially more important.
## Market-Driven Solutions and Consumer Empowerment
A significant theme involved the potential for consumer awareness and market pressure to drive improvements in AI system reliability. Labbé noted that AI companies currently prioritise new features over safety and accuracy, but argued that user pressure could shift this balance toward reliability.
The speakers discussed the need for certification and labelling systems for credible information sources and AI systems, helping users identify trustworthy content in AI-mediated environments. However, this approach requires raising public awareness about the scale of misinformation problems in current AI systems, as many users remain unaware of the frequency with which AI chatbots repeat false information.
## Key Challenges and Debates
The discussion revealed both areas of agreement and ongoing debates. While speakers agreed that AI has fundamentally transformed information creation at unprecedented scale and that education is crucial for building resilience, disagreements emerged regarding the effectiveness of current fact-checking approaches.
Caswell argued that previous anti-disinformation efforts have been largely ineffective due to scale issues and perceived bias, whilst others defended the work of fact-checkers. There was also debate about the extent to which AI can replace original source journalism, particularly in areas requiring human presence such as war journalism and personal storytelling.
## Recommendations and Future Directions
The panel concluded with concrete recommendations including developing AI literacy educational materials and programmes, creating certification and labelling systems for credible information sources, and preserving primary source journalism as the foundation for AI-based secondary journalism.
The discussion emphasised implementing the Council of Europe’s three-pillar approach comprehensively, addressing both technical and social aspects of the challenge through coordinated efforts across regulatory frameworks, educational programmes, market mechanisms, and technological solutions.
## Conclusion: Transforming AI from Weapon to Force for Good
The discussion concluded with Gríkova’s call to transform AI “from a weapon to a force for good” in the information ecosystem. This transformation requires coordinated efforts across multiple domains: regulatory frameworks that protect human rights whilst enabling innovation, educational programmes that build critical thinking skills, market mechanisms that reward truthfulness and accuracy, and technological solutions that preserve human agency in information consumption.
The speakers demonstrated that whilst AI poses unprecedented challenges to democratic dialogue, it also offers significant opportunities for improving information systems. The key lies in developing comprehensive approaches that address both technical and social dimensions of the challenge, ensuring that democratic institutions can adapt to and thrive in an AI-mediated information environment.
Session transcript
Irena Gríkova: Good afternoon everyone. Welcome to the IGF open forum on AI and disinformation countering threats to democratic dialogue organized by the Council of Europe. My name is Irena Gríkova, I’m head of the Democratic Institutions and Freedoms Department at the Council of Europe and I will be moderating this panel. I’d like immediately to thank my colleagues Giulia Lucchese sitting just here and Evangelia Vasalou who is online for helping to put together this panel and will be also producing the report and helping with the moderation. In this session we will be delving into one of the most pressing challenges facing democratic societies, in fact all societies today, probably not just democratic societies but we are, I’m personally concerned about democratic societies in the first place: the use of artificial intelligence in generating and spreading disinformation. But we will also hopefully discuss the role AI can play in actually curbing and limiting the spread of disinformation. Combating disinformation is a top priority for the Council of Europe as a human rights organization. For those of you who may not be familiar with the Council of Europe, especially those coming from other continents, the Council of Europe is what I call the older and larger brother of the European Union, an organization of 46 member states with a particular focus on human rights, democracy and the rule of law. We are called Europe’s democracy and human rights watchdog, and as such of course we are extremely concerned with the phenomenon of disinformation and all the other threats to democracy today. The Council of Europe is also always on the forefront of how technological development impacts our societies and the rights-based ecosystem that we have created, and this is why the Council of Europe prepared and opened for signature and ratification last year the first international treaty on AI and its impact on democracy, human rights and the rule of law.
And we are now in the process of developing sector specific policy guidelines and also supporting member states in implementing specific standards in different areas including in the field of freedom of expression. The Council of Europe has also been at the forefront of analysing the impact of AI generated disinformation and its role for resilient rights based pluralistic and open information ecosystem. In particular last year we issued a guidance note on countering online mis and disinformation which is uploaded as a background to this session also in the chat but there are a few copies here in the room in case you still want an analogue copy of the disinformation of the guidelines. And this note offers practical guidance really very very specific and detailed pointers of how states and other partners in this democratic system that we are trying to protect, digital platforms, editorial media and other stakeholders can fight disinformation in a rights compliant manner. Now it’s a soft instrument, this is not a binding treaty but it does include the collective wisdom of all of our member states and a large number of experts, some of them sitting around here. And therefore it’s really interesting and useful and it’s organised around three pillars, the main things that were suggested here and mind you in the process, at the time when it was actually developed and written AI was yet not that prevalent and prominent so it’s not so much AI. The pillars are fact-checking, calling for independent, transparency and financial sustainability by both states and digital platforms. Especially, platforms are urged to integrate fact-checking into their content systems. Unfortunately, we’ve seen in the past few months that the platforms have been disengaging from that commitment, and that’s an issue in itself. Platform design is the other pillar of that disinformation strategic approach. The Guidance Note advocates for human rights by design and safety by design principles. 
These are key words that we've been hearing a lot during this IGF and also in previous editions; they are the basic principles of an information society the way we'd like to see it. There is an emphasis on human rights impact assessments and process-focused content moderation, in order to favor nuanced approaches to content moderation and content ranking, preferable to blunt takedown approaches. Perhaps today we will explore how AI can help us achieve such a nuanced content moderation approach. The third pillar of the Guidance Note is user empowerment. I particularly see that user empowerment is becoming more and more prominent as a tool, a strategy, a dimension of fighting harmful content, including disinformation. That includes all kinds of initiatives at the local level, community-based and also collective. We are working on the role of AI in creating a resilient and fact-based information ecosystem, particularly on applied documents that will be even more specific and practice-oriented, and we support implementation by states through field projects in which we work directly with our member states. Just to introduce the panel, and I'll give the floor to our first panelist in a minute: our thinking at the moment, the areas that we are really looking into in more detail as policy strategies, starts with how to reinforce public service media. Public service media has always been the cornerstone of a truth-based, authentic and quality information system, and it is threatened. We need to find ways of strengthening public service media, but also to enhance the capabilities of regulatory authorities, their mandates and their independence, to navigate the rapidly evolving digital environment. Another line of thought is how to demonetize the disinformation economy, to cut off the financial incentives that help amplify disinformation content.
And then, indeed, another topic that's been very much on the surface here in Norway: how to enforce regulation and co-regulation, and how to strengthen regulation. There are dissenting voices against regulation, and obviously we hear them as well, but for the Council of Europe, moving towards stronger regulation of platform design, to ensure transparency and public oversight in content moderation and curation, is a must. And finally, investing in the resilience of users. There is a lot of debate, publication and research about the supply side of disinformation: how do we ensure that the content produced and visible out there is less disinforming or less harmful? But then what about the demand side? And I'm putting this in inverted commas, because the demand is not necessarily explicit or willing, but, you know, the use of information. What do we do about users, and how do we make sure that they actually seek out and demand quality content even when it's there? So, our speakers. In the first place, David Caswell will tell us what the state of the art is now. How is AI impacting disinformation, and what can we do about it? It's clearly amplifying the problem, but it may also be part of the solution. And we have an amazing panel here for you. Without further ado, I will introduce David, who is a product developer, consultant and researcher of computational and automated forms of journalism. Full disclosure: David is also an expert for the Council of Europe, a member of our expert committee for the guidance note on the implications of generative AI on freedom of expression, which is forthcoming. So keep an eye on the Council of Europe website; you will be informed about it by the end of the year. So David, please share your perspectives on AI and disinformation, the challenges and hopefully some solutions.
David Caswell: Thank you for the introduction. Before I start, I'll just make it clear that I'm not an expert on misinformation. My expertise is around AI in news and journalism and civic information, so I focus less on the social media side of things than on news, but I think a lot of this applies either way. Before getting into what AI is doing in the space of disinformation, I want to talk a little bit about what was happening before AI came along, so if you can imagine pre-ChatGPT, which is even most of our ecosystem now. The big change that made the last 10 or 15 years of disinformation and misinformation activity necessary was this: we basically changed our information ecosystem from a one-to-many, or more accurately a few-to-many, shape to a many-to-many shape. We went from a situation where only a few entities, mostly organizations, could speak to a large audience, to a situation where anybody could speak to a large audience. This was the internet, and then social media, of course. But this was a change in the distribution of content, the distribution of information, and this is the technical change that caused the cascade of activity over the last 15 years, including around disinformation and misinformation. Again, before AI, it's worth looking at how the response to that era of disinformation went, and I would suggest that it hasn't gone well. I would suggest there aren't many people on any side of any of the arguments who would say that it's been very successful and worked well. Just to go through some of the things that I think people perceive here: one is that it was generally ineffective.
I think there's an issue around scale here: the scale of communication on the internet and on social media is such that things like fact-checking provide just a tiny, tiny drop in this vast ocean of information. I think there's a perception of a certain alarmism around it. There's a lot of research coming out on this, with different ways of looking at it, but essentially the net of it is that the concern around these things seems to be restricted to a relatively small portion of the population; most people think it's less of an issue. I think there's a sense or a perception that there's a certain self-interest around misinformation and disinformation activities, and that has to do with the overlap with journalism. There's a sense that as journalism was diminished and its power reduced in the internet era, a lot of that activity went over into the misinformation and disinformation space. On the political side, I think we're pretty aware that, on the left-right continuum, there's a sense that the whole disinformation and misinformation space has a left-coded bias to it. This is certainly what Mark Zuckerberg used as his justification for turning off the fact-checking activity at Meta. I think there's another kind of politicization going on here as well, which is more an elites-versus-populism politicization. That's easy for me to say. The thing that's happening here, and there's a really good book by Hugo Mercier about this, is that in the elites-versus-populism dimension, misinformation and disinformation is used as a reason or an excuse or a narrative as to why populism is happening: it's as if misinformation and disinformation is causing it by fooling half the population. So I think that's been an issue.
I think there's an issue around most of this being anecdotal, and not just anecdotal case by case, but anecdotal in terms of the artifacts: focusing on individual artifacts, individual images, individual facts, individual documents, these kinds of things, not on systems, not on the processes and systems that produce them. These are just my views, but I think it's a general perception in a lot of parts of society that this attempt to put some order on the information environment has not been successful. So let's turn to AI. In this AI era, the thing that's changed now, and it had been building for a long time before, but roughly since ChatGPT, is the transition from an artisanal form of creation of news and journalism to an automated form of news and journalism. This is quite profound, because news and journalism, and the creation of knowledge generally, was one of the last handmade or artisanal activities in society. With AI we now have tools that can at least partially automate it, which is a new thing for the information ecosystem. That includes the gathering of news-like information, the processing of it, and especially the creation of experiences of it for consumption. That's, I think, the fundamental new thing here. In terms of the risks AI poses that are new, there are a lot of them. An obvious one is the risk of accelerating this fragmentation of shared narratives. This has obviously been a building issue since the internet came along, basically, but it's important to keep in mind that this is not just a news or journalism concern. It's happening in all knowledge-producing activities. It's happening in scientific and academic publishing.
It's happening in government intelligence services. It's happening in enterprise knowledge management. The fundamental mechanisms behind this are really quite broad. I think there's a second risk that's not widely perceived: the ability, with AI, to extend disinformation beyond individual artifacts, like deepfakes or individual facts, to entire narratives that extend over many different documents and images and videos and media artifacts, and that extend over long periods of time, days or weeks or months or even years. An example of this in the manual space is the hasbara program that the Israeli government has been running for many years, about 2,000 people who basically work on influencing narratives across the world, and with AI we may be entering a world where that can be automated and also made accessible to many, many more people and agents and governments and other actors. I think there's a major risk developing around automated and personalized persuasion at scale. You could think of this as radicalization at scale, or grooming at scale, to use the word the Brits like to use. There have been a couple of very, very interesting papers on this recently. One of them is actually not a paper, because there was an ethical issue in the data collection and so it wasn't an official paper, but generally it seems that these AI chatbots are already substantially more persuasive than humans. That's their effectiveness, and then you combine that with the fact that you can operate them individually, in a personalized way, across the whole population at some level. I think that's a new and significant risk. And then there's another deep risk that is really underappreciated here, which is that as we start to use these models as a sort of core intelligence for our societies, there are biases in these models. We talk a lot about biases in AI models now, and that's very true.
There are biases in the training data, and there are attempts to resolve this in things like system prompts or in the reinforcement learning from human feedback that helps to train these models. But even more fundamental than that, we're starting to see intentions to place deep biases
into the model. One example, in this tweet that showed up recently, is Elon Musk. He built this large language model, Grok, with the intention of it being what he called a maximally truth-seeking large language model. And it was a little too truth-seeking for his taste, so he's been getting into this Twitter, or X, argument with Grok over the last little while. You can see this interchange happening where he'll ask a question and he doesn't like the answer, and so on. He has just recently announced that they're going to use Grok to basically rebuild the archive on which they train the next version of Grok. They're going to write a new history of humanity, basically, and then use that to train Grok. That's an example of building a broadly used large language model with the biases deep in the training weights. That's a very significant risk. So if you go all the way down to the foundations or the first principles here, there's a new deep need, a new core requirement, in the information ecosystem, and this is articulated very, very well in the recent Yuval Noah Harari book. What he says in this book, and I think it's very well evidenced and argued there, is that we've depended for 400 years on mechanisms like the scientific method, journalism, government intelligence and the courts. These mechanisms have two characteristics: they're truth-seeking and they're self-correcting. They have internal biases that move them towards the truth. I think there's actually a third requirement in there that maybe wasn't as apparent until these large language models got going, and this is just a personal interpretation: I think our mechanisms for things like journalism and civic information also need to be deterministic rather than probabilistic.
In other words, they need to be specifically referenceable and explainable and verifiable and persistent in an archive, all of the things that large language models are not. Just to turn to the opportunities here, because the news is not all bad: the scale of the opportunities, I think, is of the same order as the scale of the risks. There are some real opportunities. One is the possibility that we might have news or journalism or civic information, societally beneficial information, that is systematic instead of selective. In other words, the scarcity issues around collecting, processing and presenting or communicating this information go away, and we have a level of systematic transparency that is vastly greater than it is today. I think that's a very real possibility: scaling the amount of civic information in the ecosystem. Another is the new ability to make civic information and news and journalism accessible to many, many more people, regardless of literacy or language or style preference or format preference or situation, because now we can adapt this information to each individual. That's a very significant new thing. And those two things together, the scale and the accessibility, mean that we really do have, I think, the possibility, if we were to build towards it, of having relevant, accessible, societally beneficial information available to everybody, at a much deeper degree of personal relevance than we've had before. And finally, one of the challenges of information now is that it feels very overwhelming. We have a news avoidance problem at scale; we all have a personal sense of being overwhelmed by information. I think AI helps us address that. The thing we're primarily being overwhelmed by is units of content, not information.
And we have this new possibility with AI not just to have dramatically more information, but also to feel more in control, with more mastery of that information. So just to wrap up here, I'd like to suggest an ideal for what we might aim for as an opportunity in this AI-mediated information ecosystem that's forming.
David Caswell: And I think it's worth looking at this in terms of a continuum. The continuum goes from, say, medieval ignorance way down at one end, to godlike omniscience, or maybe a Star Trek universe level of awareness of your environment, at the other. And if we look at that continuum as an ideal, we've made a lot of progress along that line. We've gone from a situation before the printing press and before literacy, where people really didn't know much about their world at all apart from their immediate experience, through these inventions and the cultural adaptations to those inventions, to a point where the amount of information we know about our world almost instantly, at our fingertips, is just staggering compared to what would have been available to an average citizen in, say, 1425. But I think there's also no reason to think that we've stopped, that we're at the optimum place on that continuum, the place where the democratic dialogue is as good as it could ever be, or was recently as good as it could ever be. I think we've got a ways to go. The AI that exists now, diffused into our information ecosystem with the right governance and the right orientation, could move us a considerable way up that continuum. If we get something like AGI, maybe in five or ten years, I think that could move us even further. And then at some future hypothetical point, maybe some kind of superintelligence moves us even further again. So there are a lot of technical challenges here, governance challenges, safety and security challenges, of course. But I think as an ideal, trying to move to the right on that continuum is a good place to be. I'll just leave that there.
Irena Guidikova: Great, thanks a lot, David. That was a lot of food for thought. Personally, I was quite struck by how AI can now create a new plausible reality, just by the sheer scope and sophistication and scale of it. How do you fact-check your way out of a completely new alternative reality? Obviously, you can't. And I really liked your idea that we are spiraling up and up, much faster than we can actually conceptualize it, from information to content, and then hopefully to information again. But let's see if we have some more practical tools to do that. Our next speaker is Chine Labbé. She is Senior Vice President and Managing Editor for Europe and Canada at NewsGuard, a company that tackles disinformation online, and Chine will explain what they do, how they do it, and what the results are.
Chine Labbé: Hi, thank you very much for having me. I'll start right away by explaining how AI has supercharged disinformation campaigns to this day. The first thing we're seeing is that malign actors are, as you all know, increasingly using text, image, audio and video generators to create content: deepfakes, images, et cetera. Just to give you one piece of data to illustrate that, take the Russia-Ukraine conflict. During the first year of the war, out of about 100 false claims that we debunked at NewsGuard, one was a deepfake of Zelensky, very pixelated, very bad quality. Fast forward to the third year of the war: we debunked 16 deepfakes, super sophisticated, super believable. That's still only 11% of the false claims we debunked that year, but they increased quite astonishingly. And of course the more recent conflict, Israel-Iran, has also seen lots of deepfakes and AI-generated images being shared and circulating online. This is just one example: a video shared as part of a Russian influence campaign called Storm-1516 shows a person whose identity was weaponized. It's a real person, but modified with AI to say that he was sexually assaulted by Brigitte Macron, the wife of Emmanuel Macron, France's president, when he was a student. He actually was a student of Brigitte Macron, but he never said that and never was sexually assaulted. The video is all the more believable because the person exists: if you Google him, you can see that he was a student of Brigitte. So all that, of course, has increased a lot. Now, the second thing that we've seen in terms of how AI is supercharging disinformation is the use of AI tools to imitate credible local news sites: basically, creating entire networks of websites that look just like a local news site, that share maybe some reliable information and then some false claims.
And that's entirely generated with AI, maintained with AI, with no human supervision. The photo I put here is of a quite infamous Kremlin propagandist, an American, a former Florida sheriff's deputy, now exiled in Moscow. His name is John Mark Dougan, and he alone is behind more than 273 websites that he created using AI, with an AI server, to imitate local news sites, first in the U.S., with names that really sounded like local news sites, and then in Germany ahead of important elections. These AI content farms have grown exponentially over the past few years. We started monitoring them in 2023: in May 2023 we had found 49 of them; fast forward to today, we have counted 1,271, but that's probably the tip of the iceberg, only what we're seeing and what we can actually confirm as being AI-generated. Why? Because it's really cheap to create an AI-generated news site. We did the test: a colleague of mine, and there's an essay he wrote about the experience in the Wall Street Journal, paid $105 to a web developer based in Pakistan, and in just two days he had his self-running propaganda site. So this is a very simple propaganda machine. It's password-protected, don't be alarmed, we didn't want to put any more disinformation on the web, but it took just two days and $105. Now, these over 1,000 sites that we found, these AI content farms, are not all publishing false claims or only running false information, but they're all, I would argue, a risk to democratic dialogue. Why? Because when an entire site is AI-generated with no human supervision, which is the case for all these websites, you have the risk of hallucinations, factual errors and misrepresentations of information. Now, that's just one example.
A recent one: the BBC conducted an experiment in December 2024. They asked 100 questions to four chatbots based on their own reporting, so they gave the chatbots access to their website, and then evaluated the responses. In 10% of the cases there were significant problems with the responses; in 19% of the cases the chatbots introduced factual errors; and in 13% of the cases there were quotes that were never in the original articles, or that were modified by the chatbots. And just another recent example of such a problem, from the United States: a published reading list included very real, existing authors, but next to their names were nonexistent books, along with some existing ones, which makes it all the more troubling. Now, just imagine small errors like that slowly but surely polluting the news we consume; I would argue this is a very concrete threat to democratic dialogue. What we're seeing in particular, in the case of disinformation, is that AI chatbots are often repeating disinformation narratives as fact. It's a vicious circle where chatbots fail to recognize the fake sites that AI tools have contributed to creating, and they will cite them and authoritatively present information that's actually false. So you have information created and generated by AI, repeated through those websites, and validated by AI chatbots: a really vicious circle of disinformation. Now, in early 2023 the idea that AI chatbots could be misinformation super-spreaders was hypothetical. We looked at it because it seemed possible, right? But today it's a reality. Chatbots repeatedly fail to recognize false claims, and yet users are turning more and more to them for fact-checking, to ask them questions about the news. We saw it recently during the LA protests against deportations. This is just a very striking example.
There was a photo of a pile of bricks circulating online, presented as evidence that the protests were staged, orchestrated by someone putting bricks there to encourage the movement, and not organic. The photo was actually from New Jersey, not California. Users online then turned to Grok and asked it to verify the claim, and even when a user pushed back, saying, no, Grok, you're wrong, Grok would repeat the falsehood and insist that it was true. And in recent days we saw the same thing with a false claim stating that China had sent military cargo planes to Iran, which was based on misinterpreted flight-tracking data. And we're doing that every month now: we're auditing the main chatbots to see how well they resist repeating false claims, and what we're seeing month to month is that about 26% of the time they repeat false claims authoritatively. This is just one example: asking Mistral about a false claim pushed by Russian disinformation sites, and Mistral is not only saying that it is true, but also citing known disinformation websites as its sources. Now, the problem is not just an English-language problem; it's a problem in every language. Before the AI Action Summit in Paris earlier this year, we did a test in seven languages, and we proved that it was a problem in all languages, and especially prevalent in languages where there's less diversity of the press.
That is, where the language is dominated by state-sponsored narratives. Now, if I told you that we had put a drug on the market where 26 pills out of a hundred are poisoned, would you find that okay? That's the question we have to ask ourselves when talking about information and AI. And the last thing I want to raise here is that this vulnerability, which we are still failing to put guardrails against, is well identified by malign actors. They know that by pushing Russian narratives, for example in the case of Russian actors, they can game AI and influence the results given by chatbots. This is a process that has been well exploited by a network called the Pravda network, a network of about 140 sites in more than 40 languages that is basically a laundering machine for Kremlin propaganda, publishing more than 3 million pieces of content a year. And with no audience: the websites have very few followers, very little traffic. Their goal is just to saturate web results so that chatbots will use their content. We did an audit and found that 33% of the time the chatbots would repeat the disinformation from the network. We did the test again in May, two months later, and it had gone down to 20%. We don't know what mitigation measures were put in place, but the problem remains pretty much the same. And I'll just end here, because I'm running out of time. To conclude: with generative AI, disinformers can now spend less for more impact. As David said, it's the scale that is changing dramatically, the automation. And now they can also influence the information that is given through AI chatbots, through this process of LLM grooming. I'll end on a positive note: yes, AI can help us fight disinformation. We're using AI for monitoring, for deploying fact-checks. We can even use generative AI, as David said, to create, for example, new formats for presenting content, as long as the human is in the loop.
But it's hard to foresee a world in which we'll be able to label all false, synthetic disinformation. That's why I think a very important step today is also to label and certify credible information, and allow users to identify credible information that way. That's what we do at NewsGuard. There's also the Journalism Trust Initiative, and Trust My Content is another example. I think that's a very positive way forward.
Irena Guidikova: Thank you, Chine. I think we've entered the era of mistrust. We seemingly cannot trust anyone anymore, not even news sites, which may be fake. And I think this is dangerous, not just for democratic dialogue but for democracy itself, because when there is such widespread distrust within society, between individuals, democratic institutions collapse; democracy is ultimately built on trust. Maybe we need public service AI chatbots that are trained only on reliable data, because unfortunately even efforts to defend legitimate, editorial media are struggling. Our next speaker is Maria Nordstrom. Maria is PhD, Head of Section in the Digital Government Division at the Ministry of Finance in Sweden. Maria works on national AI policy at the Government Offices of Sweden, as well as on international AI policy. In particular, she participated in the negotiations of the EU AI Act and the Council of Europe's Framework Convention on Artificial Intelligence. Please, Maria, the floor is yours.
Maria Nordstrom: Thank you, and thank you for having me. I've had the pleasure and the privilege of currently being on the Bureau of the Committee on AI at the Council of Europe, whose main task was to negotiate the Framework Convention on AI, already mentioned, which was adopted and opened for signature last year. It is, as mentioned, the first legally binding global treaty on AI. It is global because it's open for signature not just to the members of the Council of Europe; it can be signed by other states as well, and since its opening for signature it has also been signed by Japan, Switzerland, Ukraine, Montenegro and Canada. So it is, in essence, a global treaty on AI, human rights, democracy and the rule of law, and it formulates fundamental principles and rules which safeguard human rights, democracy and the rule of law throughout the AI life cycle, while at the same time being conducive to progress and technological innovation. As we've heard, and as I think we know, AI has the potential to enhance democratic values and improve the integrity of information, but at the same time valid concerns have been raised. The integrity of democracy and its processes rests on two fundamental assumptions: that individuals possess both agency, the capacity to form an opinion and act on it, and influence, the capacity to affect decisions made on their behalf. AI has the potential both to strengthen and to undermine these capacities. David mentioned AI-driven persuasion at scale, which is an excellent example of how these capacities can be very efficiently undermined. So it's not very surprising that one of the core obligations under the Convention, the Council of Europe's AI Convention, is for parties to adopt or uphold measures to protect individuals' ability to freely form opinions.
These measures can include measures to protect against malicious foreign interference, as well as efforts to counter the spread of disinformation. And as we've heard, AI can serve both as a tool for efficiently spreading disinformation, thereby fragmenting the public sphere, and as a tool to combat disinformation. This has to be done with some thought put into it, so it's essential to implement appropriate safeguards and to ensure that AI does not negatively impact democratic processes. Currently, the Committee on AI at the Council of Europe is developing a risk and impact assessment methodology which can be used by developers and other stakeholders to guide responsible AI development. It's a multi-stakeholder process, with civil society and the technical community involved, and there is still time to contribute to it if you wish. It's a great example of how we can go from a convention to a tool which hopefully will have practical value and can be used by practitioners and policymakers to assess the risks AI systems pose to democratic processes and democratic dialogue. Another safeguard that we in Sweden believe is very important is AI literacy. AI literacy, both understanding what AI is and understanding how it relates to disinformation, is crucial in addressing the challenges posed by the rapid advancement of AI technologies. The Swedish government has therefore tasked the Swedish Agency for the Media with creating educational materials to enhance the public's understanding of AI, particularly in relation to disinformation and misinformation. They will develop a web-based educational tool, to be released later this year. However, one of the things we are thinking about from a policymaker's perspective, and the key challenge, is to find the right balance between providing sufficient information and not further eroding public trust in the information ecosystem.
So the important question here is: to what extent is it beneficial for society when all information is questioned? What does it do to democracy, and what does it do to our agency, when we can no longer trust the information that we see, read and hear? Finding that right balance, informing about the risks while not eroding the public’s trust, is, I think, a key challenge, and something I’d love to talk more about. Thank you.
Irena GrÃkova: Thank you very much, Maria. Indeed, the AI Treaty of the Council of Europe is really important. It does need to be ratified, though, so I encourage everyone here who has any say or any way of doing advocacy for the signature and ratification of this treaty: do not hesitate to come back to us, to Maria, to our colleagues, to myself, to find out more about it. Our final speaker is Olha Petriv. Olha is an artificial intelligence lawyer at the Centre for Democracy and the Rule of Law in Ukraine. She is an expert on artificial intelligence and played an active role in the discussions and negotiations of the Framework Convention on Artificial Intelligence of the Council of Europe. Olha, coming from Ukraine, you are clearly observing at first hand the challenges that artificial intelligence poses for society amidst the war of aggression. And you also have some ideas about tools that can actually help curb that phenomenon. Can you tell us about it?
Olha Petriv: Yes, thank you. And I want to start by saying that in Ukraine, disinformation is not just a problem; it’s something that we face and solve every day. And we have already taken some steps to fight it. Our Ministry of Digital Transformation in Ukraine has already created an AI strategy, and also a roadmap on AI with a bottom-up approach, which helps us right now to take our first steps with companies and with civil society to fight disinformation during the war. What it means is that we don’t wait for the law to be passed. It has two parts: the first part consists of recommendations for society, for business, for developers, and the other part covers the HUDERIA methodology, self-regulation and other main steps that help us not to wait for the law. I want to share more about self-regulation as a first step before the law. It’s a process where companies come together to create their own code of conduct and to address the problem of companies not using AI in an ethical way. That is why, six months ago, 14 Ukrainian companies, Grammarly, SoftServe and other big companies that were created in Ukraine and work worldwide, created a code of conduct that consists of eight main principles that they implement in their business.
After this, they created a Memorandum of Commitment, according to which they set up a self-regulation body. The members of this body, the businesses that signed the code of conduct, will report once a year on how they are implementing these guidelines. As a result, Ukrainian society, and other countries, can see how Ukrainian business is working with ethical AI; we can check this and understand how it’s implemented. So it was, and is right now, our first step, because we don’t want to wait for this law to arrive in Ukraine now or later. We know that if we implement ethical AI in our AI systems right now, for example the principles in the code of conduct connected with transparency and a risk-oriented approach, and other companies join the process, we will show the world, and show people inside our country, that we are innovative in using AI ethically. And during all of this, we keep fighting disinformation, because of the many campaigns we face because of the war. The other important side of what I want to discuss today is children, AI and disinformation, because children are using AI a lot too, and they can spread disinformation and can also be victims of disinformation created with their images. We already had this situation in Ukraine, in a campaign named Matryoshka. During this campaign, the face of a Ukrainian refugee girl was used to spread the claim that she disliked her schools in the U.S., with the aim of creating distrust toward Ukrainian refugees in other countries.
So that is one way disinformation is used against Ukrainian children: they become part of the disinformation process without being ready for it, simply through the use of their faces and the creation of deepfakes by Russia. Also, disinformational bullying is now spreading more and more in schools, not only in politics. What can we do about this? We are working a lot on using AI in the educational process with children, and UNESCO, for example, has created many programs connected with disinformation in Ukraine, where we teach children how to strengthen their critical thinking, because especially when you live in a country where every day you face a huge amount of faked news, you have to strengthen your critical thinking. We are also working on the idea that we should not ban AI for children; we have to give them the knowledge to use AI better and to know how AI works. Right now, AI literacy is our main strategic response to AI disinformation, and we also have to make sure children understand how to resist the different fakes that we face. For example, at my start-up we help children develop critical thinking not through lectures or moral lessons, but through an AI companion that is easy for children to understand, and as a result it’s better for their education. Because we know that if we do not teach children how to understand news and understand AI, somebody else will teach them how to think. And it’s not just a parental issue; it’s a generation’s lost potential. Thank you.
Irena GrÃkova: Thank you very much, Olha. Well, that was plenty of information and insights. Now it’s time for about 15 minutes of discussion with you, the audience, both on-site and online. If you would like to ask a question or contribute your thoughts, please use the two microphones on the sides of the room, introduce yourself, and go for it. My colleague Julia will be checking the Zoom side for any speakers online. Yes, please.
Mikko Salo: Thank you. My name is Mikko Salo. I’m representing Faktabaari, a digital information literacy service in Finland. We’ve been in the crazy world of information disorders for, let’s say, 12 years, and as a small actor we try to focus where it really matters. I very much subscribe to the emphasis on education. OK, we might be spoiled in Finland, still being able to trust in and invest in teachers over a long period, but that is along the lines of what you said. And in Ukraine, you know what you’re talking about, because you are in the face of it. But something I’d like you to specify: at what stage, and how, do you teach about AI? Because in the big picture, and this is what we learned when we were lecturing in the US, people tend to take AI for granted. What is the right age? You have to first think for yourself in order to use AI as a supportive tool, like you describe. And that, I think, is a very culturally bound thing. Then a small note to Mr. Caswell: you gave a very good presentation, but we work a lot with the fact checkers as well, and it rang a bit wrong to me to quote Meta’s Zuckerberg on left-leaning fact checking, because what happened was that Zuckerberg completely revised his opinion on fact checking because of the political tides in the US. I think it’s for everybody to judge what the fact checkers have produced. It’s definitely not enough, but the fact checkers have written a very open letter explaining the case. So that needs to be corrected for the record. To the Swedish colleagues as well: what we did first in Finland, as a small NGO, was to push the government to produce guidelines for teachers, because without such guidelines the teachers are lost; it’s such a big thing. And now that they have the guidelines, we have made a kind of guide on AI for teachers.
It has actually been translated into English as well, within the OneEU project. It’s culture-specific, but teachers still need guidance on it. I think that’s the shortest path and the lowest-hanging fruit for reaching the next generations, because if you become an AI native, so to say, without being literate in it, that’s very scary. Thank you.
Irena GrÃkova: Thank you for your contribution. Let’s take the other two questions and then revert to the panel.
Audience: Hi, my name is Frances. I’m from YouthDIG, the European Youth IGF. I had a question mainly for David Caswell about people’s preferences in journalism, because my intuition, at least, is that there needs to be an original source when it comes to journalism. So when we say that the gathering of news and information can be done just about as well by AI, is that necessarily true when there needs to be an original source? Think about war journalism, people who go into conflicts or humanitarian crises, or even the personal anecdotes and personal stories that many of us like to consume. Can AI really replace that? The second thing is that I think people also like to read the same content as other people, because it unifies us in a way, which is why you have echo chambers. I read The Economist and you read The Daily Mail, and so we’re different in that respect. Why would that necessarily disappear? The whole idea of a screen for one, or media for one, is not necessarily that attractive, because it means we’re all consumers of different news and we can’t relate to each other. The last characterization, and please correct me if I’m wrong, is that if online systems and AI are generating all of our news, that’s super-fast, maybe too fast for us to keep up with. So surely you have a comparative advantage if you’re a news site that is the original source and you only publish a few articles. I would just like to know whether you agree or disagree with those characterizations. Thank you.
Irena GrÃkova: Thank you. Yes, please.
Audience: Hello, my name is Jun Baek from Youth of Privacy. We are a youth-led privacy and cybersecurity education organization. One of the lessons we learned over the last two years is that education alone is a very hard battle to win, because of the scale of the problem. I was wondering if there might be other ways of incentivizing AI service providers to be more grounded in truth and reality, and what are some ways that we could try to incentivize them to do so? Thank you.
Irena GrÃkova: Thank you so much. Actually, I had a very similar question for Olha: how do you actually motivate AI companies to comply with self-regulation? I’ll abuse my role as moderator to also ask David a question, because you were saying that we need news that is systematic and not selective, and I don’t actually understand what you mean by that. So let’s address all of these questions now. Who wants to start?
Olha Petriv: Okay. I want to say that the answer to this question is more complex, because when children are using AI, we of course want to give them a safe environment. What we can do, as people connected to the policy process who can work with the ministry, is to ask more and more of the companies whose services children use to provide them in an ethical way. And on the other side, of course, we have to work with parents and with teachers. That is why I said that it’s not just a parental issue that we have this gap when AI is already here. It means we should build on the work that UNESCO did: they wrote in 2023 that AI skills are so important for children, beginning with understanding algorithms and the other important steps that children have to understand while they are at school. As for age, people already use AI with their children at a very young age, and it’s important to help children understand that AI can be a tool that helps them find answers to their questions, and helps them ask more and more questions. If we look at AI Leap in Estonia, which will be part of their educational program, and at Mriya in Ukraine, where we also integrate AI into the educational system for teachers to use, we understand that, for example, in AI Leap there are lesson components responsible for critical thinking. And that is the main part of it. And yes, that’s my answer.
Irena GrÃkova: David?
David Caswell: Before I answer the questions, I’d just like to make a comment. I really think this idea of an AI chatbot to teach children AI literacy is an absolutely brilliant idea, and I’m going to have to noodle on that one. I think it’s really, really smart. So I’ll just go through these questions, and please correct me if I’ve got anything wrong here. On the left-coding of disinformation and misinformation concerns: yes, the Mark Zuckerberg situation, him specifically, is one thing. But I think the reason he’s up there making these decisions is that essentially half the electorate in the US feels broadly the same way. The point I was trying to make, without a lot of time to do it, is that I think it’s a tragedy that we have a partisan slant on this idea of an untruthful information environment. If there’s ever a thing we should all agree on, regardless of which side of the political spectrum we’re on, it should be a basis in fact and accuracy. On fact checking specifically, there’s also a massive scale issue: no matter how much you spend on fact checking, you just can’t keep up with the volume of claims that need to be checked. On the original-sources question: AI can be applied to only some kinds of journalism, essentially those where the raw source material is digitally accessible; it has to be accessible to the AI. But I’d make a few comments there. One is that this is already a significant and maybe even a majority chunk of journalism. If you watch what journalists actually do in newsrooms these days, a lot of it is sitting at a computer, less and less out in the field. People always bring up war journalism, and that’s very important.
AI is not going to do that in my lifetime. But that’s a very, very small part of journalism. And also, you know, the abilities of these systems to do other kinds of journalism, for example, interviews, AI is interviewing people, you know, these kind of things, you know, email exchanges, all of this kind of stuff. That’s a very real thing as well. It’s already starting. On the use of publications as part of identity, you know, this idea of a shared, you know, I read The Economist and I’m with people who read The Economist and somebody else reads The Telegraph and so on. I think that actually gets to one of the likely strategies that these kind of publications will take in the face of AI, which is to move up the value chain and focus more on identity. And that’s probably going to be quite successful. The problem with that, though, is it’ll be quite successful for The Economist and Telegraphs of the world because these are subscription-based, very narrowly focused identity-based publications. But if you take news publications and move them into sort of a high-value luxury good kind of category, you’re leaving out a lot of the population. So I think that’s a challenge there. The speed thing, I don’t know. I mean, I think, you know, there’s different ways to adapt to the news cycle or what even is a news cycle that’s always on. I think one of the things that AI can do is just basically create the experience that you want. So if you want a daily update, you know, whatever style or form of interaction or experience you want, and that’s one of the advantages there. The other question about… I don’t know if there’s another one for me there, but on your question about systematic versus selective, the opportunity there is, if you look at a domain of knowledge, say the auto industry in Bavaria, for example, some specific area of news. Right now, there’s a lot of journalists that cover that, but what they do is they don’t cover everything that’s happening because they can’t. 
There’s just not enough of them. They don’t have enough time. So they select, this is called newsworthiness decisions. They find stories, specific things in that industry, in that geography to report about. Whereas with AI, for some significant portions of that domain, that news domain, every single PDF can be read and analyzed. Every single social media post by every single executive can be analyzed. The automated systems can systematically cover all of it. Again, only the stuff that’s digitally accessible, but they can do it systematically. Whereas journalists have to pick and choose because they’re in a world of scarce resources.
Irena GrÃkova: Yes.
Chine Labbé: Maybe, yeah, just to address the question about AI versus on-the-ground journalism: I think that’s an opportunity for on-the-ground journalism. Having worked in newsrooms most of my life before joining NewsGuard, and having a hybrid role, we often just didn’t have the time or money to go on the ground and report. On the ground doesn’t necessarily mean going to a war zone; it can mean just going across the street and interviewing people. With AI, journalism can go back to its roots and do more on-the-ground reporting. It’s an opportunity. Then the one question I wanted to address is how to incentivize AI providers to base their systems more on the truth. I think the first step is to raise awareness, because a lot of people don’t realize the scale of the issue. The more you have tests like the BBC’s, and audits like ours that show repeatedly that chatbots authoritatively share false claims and can’t help you with the facts, the more people will ask the platforms to do better. And at the end of the day, it’s a business: if users ask for more truth, the platforms will have to put in the guardrails. The problem today is that AI chatbots are not meant to provide you with accurate information; that’s not what they’re designed for. But that’s what people are more and more using them for. So as people increase their use toward that end, we have to raise awareness among consumers so that they ask for more reliability. The problem we’re seeing in our audits is that the chatbots tend to do worse in months when they release new features. What does that mean? It means the industry is focusing on efficiency and on sexy new features, but not on safety. When you have new features, safety usually takes the back seat when it comes to news. So I think it has to come from the users asking for it.
David Caswell: Sorry, if I could build on that one second. On hallucinations specifically, there are other kinds of errors in AI output besides hallucinations, but on hallucinations there is a website, an AI leaderboard, that measures the hallucination rate of different models. And although you have regressions in hallucination rates, like the o3 model that was just released from OpenAI, you can see the march over time of these models going from hallucination rates around the 15% range to, I think, about 0.7% for the top models on the leaderboard now. That’s an indication of progress. I think it’s a lot less spectacular than it looks, and there are other sources of error in AI output beyond hallucinations; omission is a big one. But we are in a transition phase with these tools, and they will get better.
Irena GrÃkova: And I just wanted to add: I know we have two minutes, so I will give the speakers the last word for their concluding remarks, if you want to say something.
Maria Nordstrom: Yeah, these can be my concluding words, I guess, because I fully agree that these are consumer products and we can empower the consumers. At the same time, we are limited to hard-law regulation and soft-law measures. When it comes to hard law, we have the AI Act in the EU, for example, but it’s hard to require the truth by regulation; I think that’s quite difficult to achieve through that particular measure. So when it comes to incentives, I think it’s very true that we can empower consumers, and probably help lower the bar for consumers to understand and compare these products, because ultimately there are various AI systems out there that you can use, and we can help consumers make a conscious choice about which systems they use.
Irena GrÃkova: Exactly.
Chine Labbé: The one thing I’d like to conclude with is that malign actors are betting on two things. They’re betting, one, that we will use AI chatbots more and more for information. According to the latest digital news report, 7% of people in the world say they use AI every week for news, but it’s 15% among the under-25s, and it’s going to grow spectacularly. And they’re betting, two, that we’re not going to put in the guardrails. So I think we have to focus on that: realise that, yes, we’re going to use AI more and more for news, and put in the guardrails. Thank you.
Irena GrÃkova: Less than one minute, so if you have three seconds, a conclusion or…
David Caswell: Yeah, I’d just like to emphasise that it’s probably worth paying attention to the difficulties of the last 10 years of misinformation and disinformation response, and not necessarily continuing those approaches into the AI era. I think particularly what that means is a focus at the systems or strategic level. And what that requires is an ideal: a view of what we want our information ecosystem to look like. We have to have that conversation first, because then we know what we’re steering towards.
Irena GrÃkova: Okay.
Olha Petriv: And I want to conclude that it’s important to remember that the target audience of disinformation and propaganda is not only the people who are voters right now, but also the people who will vote in the years to come. That is important to keep in mind when we think about the disinformation campaigns we face today.
Irena GrÃkova: Thank you very much. A couple of actionable highlights, or food for thought, because we need to conclude on an action note. First of all, for me a very important highlight: we need to preserve primary-source journalism. This is something we at the Council of Europe have actually started talking about, to create a solid basis for AI-based secondary journalism, because without it, everything will turn into an entirely virtual world. We need AI and information literacy; using chatbots to teach children AI literacy is a good idea, and there are many other initiatives out there. Perhaps we also need certification for AI bots, because it’s true that organizations like NewsGuard monitor and alert, but who knows how many people are actually aware of that. So maybe we need some kind of point system, a star system like user rankings, so that we know how trustworthy a particular bot is, or maybe even public-service bots for trustworthy information. But there can be more ideas. AI is in its infancy, and our understanding of it is even more so. So let’s hope we will together be able to turn AI from a weapon into a force for good. Thank you very much to the panelists, technicians, participants and everyone else.
David Caswell
Speech speed
166 words per minute
Speech length
2739 words
Speech time
988 seconds
AI has transformed information creation from artisanal to automated processes, fundamentally changing the information ecosystem
Explanation
Caswell argues that AI represents a transition from handmade, artisanal news creation to automated processes. This is profound because news and journalism were among the last handmade activities in society, and AI can now partially automate gathering, processing, and creating news experiences.
Evidence
He notes that news and journalism creation was ‘one of the last kind of handmade or artisanal activities in society’ and that AI can now automate ‘the gathering of news-like information, the processing of it, and especially the creation of experiences of it for consumption’
Major discussion point
AI’s Impact on Disinformation Creation and Spread
Topics
Human rights | Sociocultural
Agreed with
– Chine Labbé
Agreed on
AI has fundamentally transformed information creation and distribution at unprecedented scale
Previous anti-disinformation efforts have been largely ineffective due to scale issues, perceived bias, and focus on individual artifacts rather than systems
Explanation
Caswell contends that the response to disinformation over the past 10-15 years has not been successful. He identifies multiple problems including ineffectiveness due to scale, alarmism, self-interest, political bias, and focusing on individual cases rather than systematic approaches.
Evidence
He cites that fact-checking provides ‘just a tiny, tiny drop in this vast ocean of information’ and mentions Mark Zuckerberg’s justification for turning off fact-checking at Meta due to perceived left-coding bias
Major discussion point
Challenges in Combating AI-Driven Disinformation
Topics
Human rights | Sociocultural | Legal and regulatory
Disagreed with
– Mikko Salo
Disagreed on
Effectiveness and bias of fact-checking approaches
AI can enable systematic rather than selective journalism coverage, processing vast amounts of digitally accessible information
Explanation
Caswell argues that AI offers the opportunity to move from selective news coverage to systematic coverage. Unlike human journalists who must choose specific stories due to resource constraints, AI can analyze all digitally accessible information in a given domain comprehensively.
Evidence
He provides the example of covering ‘the auto industry in Bavaria’ where AI could read ‘every single PDF’ and analyze ‘every single social media post by every single executive’ systematically, whereas journalists must make ‘newsworthiness decisions’ due to scarce resources
Major discussion point
Opportunities and Positive Applications of AI
Topics
Human rights | Sociocultural
Agreed with
– Chine Labbé
Agreed on
AI can be part of the solution when properly implemented with human oversight
Disagreed with
– Audience
Disagreed on
Role of AI in replacing original source journalism
AI can make civic information more accessible across different literacy levels, languages, and format preferences
Explanation
Caswell sees AI as enabling unprecedented accessibility of civic information by adapting content to individual needs. This could make relevant, societally beneficial information available to everyone at a much deeper level of personal relevance than previously possible.
Evidence
He mentions AI’s ability to adapt information ‘regardless of literacy or language or style preference or format preference or situation’ and the possibility of having ‘relevant, accessible, societally beneficial information available to everybody’
Major discussion point
Opportunities and Positive Applications of AI
Topics
Development | Human rights | Sociocultural
Agreed with
– Chine Labbé
Agreed on
AI can be part of the solution when properly implemented with human oversight
Chine Labbé
Speech speed
160 words per minute
Speech length
2147 words
Speech time
802 seconds
Malign actors increasingly use AI to generate sophisticated deepfakes, with the Russia-Ukraine conflict showing a 16-fold increase in deepfake quality and quantity
Explanation
Labbé demonstrates how AI has dramatically enhanced the creation of deepfakes and synthetic media. The quality and quantity of deepfakes has increased exponentially, making them more believable and harder to detect.
Evidence
During the first year of Russia-Ukraine war, only 1 out of 100 false claims was a deepfake (very pixelated, bad quality), but by the third year, there were 16 sophisticated, believable deepfakes. She also mentions a specific example of a deepfake showing someone falsely claiming sexual assault by Brigitte Macron
Major discussion point
AI’s Impact on Disinformation Creation and Spread
Topics
Cybersecurity | Human rights | Sociocultural
Agreed with
– David Caswell
Agreed on
AI has fundamentally transformed information creation and distribution at unprecedented scale
AI enables creation of entire networks of fake local news websites that appear credible but spread disinformation at unprecedented scale
Explanation
Labbé explains how AI tools are being used to create vast networks of fake news websites that mimic legitimate local news sources. These sites are entirely AI-generated and maintained, requiring minimal human oversight while appearing authentic.
Evidence
She cites John Mark Dugan, who created over 273 websites using AI, and mentions that NewsGuard found 1,271 AI content farms as of the time of speaking, up from just 49 in May 2023. A colleague created a propaganda site for just $105 in two days
Major discussion point
AI’s Impact on Disinformation Creation and Spread
Topics
Cybersecurity | Human rights | Sociocultural
Agreed with
– David Caswell
Agreed on
AI has fundamentally transformed information creation and distribution at unprecedented scale
AI chatbots authoritatively repeat false claims 26% of the time and cite known disinformation websites as sources
Explanation
Labbé presents evidence that AI chatbots frequently fail to distinguish between true and false information, presenting disinformation as fact. This represents a significant reliability problem as users increasingly turn to chatbots for information verification.
Evidence
NewsGuard’s monthly audits show chatbots repeat false claims authoritatively about 26% of the time. She provides examples including Grok repeating false claims about LA protests and Mistral citing known disinformation websites as sources. BBC’s experiment showed chatbots had significant problems in 10% of cases
Major discussion point
AI’s Impact on Disinformation Creation and Spread
Topics
Cybersecurity | Human rights | Sociocultural
AI creates vicious cycles where AI-generated false content gets validated by AI chatbots, creating self-reinforcing disinformation loops
Explanation
Labbé describes a problematic feedback loop where AI-generated disinformation gets published on fake websites, which are then cited by AI chatbots as authoritative sources. This creates a self-reinforcing system where false information appears increasingly credible.
Evidence
She explains the process: ‘information created by AI, generated, then repeated through those websites and validated by the AI chatbots, the really vicious circle of disinformation’ where chatbots ‘fail to recognize the fake sites that AI tools have contributed to creating’
Major discussion point
Challenges in Combating AI-Driven Disinformation
Topics
Cybersecurity | Human rights | Sociocultural
Malign actors exploit AI vulnerabilities through ‘LLM grooming’ – saturating web results with propaganda so chatbots will cite and repeat it
Explanation
Labbé reveals how sophisticated actors deliberately flood the internet with propaganda content specifically to influence AI training and responses. This represents a strategic approach to manipulating AI systems by corrupting their information sources.
Evidence
She describes the ‘Pravda Network’ with about 140 sites in over 40 languages publishing 3 million pieces of content yearly with ‘no audience’ but designed to ‘saturate the web results so that chatbots will use their content.’ Initial audits showed 33% success rate in getting chatbots to repeat their disinformation
Major discussion point
Challenges in Combating AI-Driven Disinformation
Topics
Cybersecurity | Human rights | Sociocultural
AI tools can assist in monitoring disinformation and deploying fact-checks at scale when humans remain in the loop
Explanation
Despite the challenges, Labbé acknowledges that AI can be part of the solution when properly implemented. AI can help scale up monitoring and fact-checking efforts, but requires human oversight to be effective.
Evidence
She mentions that ‘we’re using AI for monitoring, for deploying fact checks’ and that ‘we can even use generative AI…to create, for example, new formats of presenting content, as long as the human is in the loop’
Major discussion point
Opportunities and Positive Applications of AI
Topics
Human rights | Sociocultural
Agreed with
– David Caswell
Agreed on
AI can be part of the solution when properly implemented with human oversight
Certification and labeling of credible information sources can help users identify trustworthy content in AI-mediated environments
Explanation
Labbé advocates for systems that certify and label credible information rather than just trying to identify false content. This positive approach helps users identify trustworthy sources in an increasingly complex information landscape.
Evidence
She mentions NewsGuard’s work and references ‘the Journalism Trust Initiative’ and ‘Trust My Content’ as examples of certification systems, stating ‘I think that’s a very positive way forward’
Major discussion point
Market and Consumer-Driven Solutions
Topics
Human rights | Sociocultural | Legal and regulatory
Consumer awareness and demand for truthful AI systems can drive industry improvements in accuracy and safety features
Explanation
Labbé argues that educating users about AI reliability problems will create market pressure for companies to improve their systems. As users become aware of the scale of misinformation issues, they will demand better accuracy from AI providers.
Evidence
She states ‘once users realize the scale of the issue…people will ask the platforms to do better. And at the end of the day, it’s a business. So if the users ask for more truth, then they’ll have to put in the guardrails’
Major discussion point
Market and Consumer-Driven Solutions
Topics
Economic | Human rights | Sociocultural
Agreed with
– Maria Nordstrom
Agreed on
Consumer awareness and market pressure can drive improvements in AI system reliability
AI companies currently prioritize new features over safety, but user pressure could shift this balance toward reliability
Explanation
Labbé observes that AI companies focus on developing attractive new features rather than ensuring accuracy and safety. However, she believes consumer demand could change these priorities if users prioritize reliability over novelty.
Evidence
She notes that ‘chatbots tend to do worse in month that they release new features’ because ‘the industry is focusing on efficiency, on new sexy features, but not on safety’ and that ‘safety takes the back seat when it comes to news’
Major discussion point
Market and Consumer-Driven Solutions
Topics
Economic | Human rights | Sociocultural
AI may allow traditional journalism to return to on-the-ground reporting by automating routine information processing tasks
Explanation
Labbé sees AI as potentially liberating journalists from routine desk work to focus on original reporting and human-centered stories. This could strengthen rather than replace traditional journalism by handling automated tasks.
Evidence
She explains that ‘having worked in newsroom most of my life…we often just didn’t have time or money to go on the ground and report’ but ‘with AI, it’ll allow journalism to go back to its roots and do more on-the-ground journalism’
Major discussion point
Opportunities and Positive Applications of AI
Topics
Human rights | Sociocultural
Agreed with
– David Caswell
Agreed on
AI can be part of the solution when properly implemented with human oversight
Maria Nordstrom
Speech speed
139 words per minute
Speech length
849 words
Speech time
366 seconds
The Council of Europe’s AI Framework Convention provides first legally binding global treaty addressing AI’s impact on human rights, democracy and rule of law
Explanation
Nordstrom explains that this treaty represents a landmark achievement in AI governance, being the first legally binding international agreement specifically addressing AI’s impact on fundamental democratic values. It’s global in scope, open to non-European countries as well.
Evidence
She notes it’s been signed by Japan, Switzerland, Ukraine, Montenegro and Canada beyond Council of Europe members, and ‘formulates fundamental principles and rules which safeguard human rights, democracy and the rule of law throughout the AI life cycle’
Major discussion point
Regulatory and Policy Responses
Topics
Human rights | Legal and regulatory
Finding balance between AI literacy education and maintaining public trust in information systems is a key policy challenge
Explanation
Nordstrom identifies a critical tension in policy-making: the need to educate people about AI risks without undermining their trust in information systems altogether. Too much skepticism could be as harmful to democracy as too little.
Evidence
She poses the key question: ‘to what extent is it beneficial for the society when all information is questioned? What does it do with democracy and what does it do with our agency when we can no longer trust the information that we see, that we read, that we hear?’
Major discussion point
Challenges in Combating AI-Driven Disinformation
Topics
Human rights | Sociocultural
Agreed with
– Olha Petriv
– Mikko Salo
Agreed on
Education and literacy are crucial for building resilience against AI-driven disinformation
Hard law regulation has limitations in requiring ‘truth’ from AI systems, making consumer empowerment and choice crucial
Explanation
Nordstrom acknowledges that while legal frameworks like the EU AI Act exist, it’s difficult to mandate truthfulness through regulation alone. This makes empowering consumers to make informed choices about AI systems particularly important.
Evidence
She states ‘when it comes to hard law, yeah, we have the AI Act in the EU, for example, but it’s hard to, by hard law, by regulation, require the truth’ and emphasizes helping consumers ‘make a conscious choice about which systems they are using’
Major discussion point
Regulatory and Policy Responses
Topics
Legal and regulatory | Economic | Human rights
Agreed with
– Chine Labbé
Agreed on
Consumer awareness and market pressure can drive improvements in AI system reliability
Olha Petriv
Speech speed
102 words per minute
Speech length
1208 words
Speech time
704 seconds
Children are particularly vulnerable to AI-generated disinformation, with Ukrainian refugee children’s faces being weaponized in deepfake campaigns
Explanation
Petriv highlights how children become both victims and unwitting spreaders of disinformation, particularly in conflict situations. She describes how children’s identities are exploited to create false narratives that undermine trust in refugee populations.
Evidence
She describes the ‘Matryoshka’ campaign where ‘a face of Ukrainian refugee girl was used to spread information that she doesn’t like different schools in the U.S.’ to create distrust of Ukrainian refugees, and mentions ‘disinformational bullying is spreading more and more in schools’
Major discussion point
AI’s Impact on Disinformation Creation and Spread
Topics
Human rights | Sociocultural | Cybersecurity
Self-regulation can serve as interim solution, with Ukrainian companies creating ethical AI codes of conduct while awaiting formal legislation
Explanation
Petriv describes Ukraine’s proactive approach of implementing self-regulation rather than waiting for formal laws. This bottom-up approach involves companies voluntarily adopting ethical AI principles and creating accountability mechanisms.
Evidence
She explains that 14 Ukrainian companies including Grammarly and SoftServe ‘created code of conduct that consists of eight main principles’ and established a ‘self-regulation body’ with annual reporting requirements on implementing ethical guidelines
Major discussion point
Regulatory and Policy Responses
Topics
Legal and regulatory | Economic
AI literacy education must start early, focusing on critical thinking and algorithm understanding rather than banning AI use by children
Explanation
Petriv advocates for teaching children how to use AI responsibly rather than prohibiting its use. She emphasizes that AI literacy should focus on developing critical thinking skills and understanding how AI systems work.
Evidence
She references UNESCO’s 2023 guidance that ‘AI skills is most so important for children, beginning from the algorithm understanding’ and emphasizes teaching children that ‘AI can be like a tool that helps them to find answers to their questions and to help them to ask more and more questions’
Major discussion point
Educational and Literacy Solutions
Topics
Human rights | Sociocultural | Development
Agreed with
– Maria Nordstrom
– Mikko Salo
Agreed on
Education and literacy are crucial for building resilience against AI-driven disinformation
Educational initiatives should help children understand AI as a tool while developing skills to resist disinformation
Explanation
Petriv argues that education should frame AI as a helpful tool while simultaneously building children’s capacity to identify and resist false information. This dual approach prepares children for an AI-integrated future while protecting them from manipulation.
Evidence
She mentions working on helping ‘children to develop the critical thinking through…AI companion’ and emphasizes that ‘if we will not teach children how to understand news and understand AI, somebody else will teach them how to think’
Major discussion point
Educational and Literacy Solutions
Topics
Human rights | Sociocultural | Development
Irena GrÃkova
Speech speed
138 words per minute
Speech length
2907 words
Speech time
1262 seconds
Three-pillar approach needed: fact-checking integration, human rights-by-design platform principles, and user empowerment strategies
Explanation
GrÃkova outlines the Council of Europe’s comprehensive strategy for combating disinformation through three interconnected approaches. This framework emphasizes both technical solutions and human-centered approaches to building resilience against false information.
Evidence
She details the three pillars: ‘fact-checking, calling for independent, transparency and financial sustainability by both states and digital platforms,’ ‘platform design’ with ‘human rights by design and safety by design principles,’ and ‘user empowerment’ including ‘initiatives at local level, community-based, and also collective’
Major discussion point
Regulatory and Policy Responses
Topics
Human rights | Legal and regulatory | Sociocultural
Mikko Salo
Speech speed
156 words per minute
Speech length
424 words
Speech time
162 seconds
Teachers need specific guidelines and support materials to effectively integrate AI literacy into education systems
Explanation
Salo emphasizes that educators require concrete guidance and resources to teach AI literacy effectively. Without proper support materials, teachers struggle with the complexity of AI-related topics and cannot adequately prepare students.
Evidence
He explains that ‘as a small NGO, we kind of pushed the government to do the guidelines for the teachers, because outside that, this kind of guidelines, the teachers are lost. It’s such a big thing’ and mentions creating ‘a kind of guide on AI for teachers’ that has been translated into English
Major discussion point
Educational and Literacy Solutions
Topics
Sociocultural | Development
Agreed with
– Maria Nordstrom
– Olha Petriv
Agreed on
Education and literacy are crucial for building resilience against AI-driven disinformation
Disagreed with
– David Caswell
Disagreed on
Effectiveness and bias of fact-checking approaches
Audience
Speech speed
211 words per minute
Speech length
458 words
Speech time
129 seconds
Using AI chatbots to teach children about AI literacy represents an innovative educational approach
Explanation
An audience member suggests that AI chatbots could be used as educational tools to teach children about AI itself. This meta-approach would use AI technology to help students understand AI capabilities and limitations.
Major discussion point
Educational and Literacy Solutions
Topics
Sociocultural | Development
Disagreed with
– David Caswell
Disagreed on
Role of AI in replacing original source journalism
Incentivizing AI service providers requires raising public awareness about the scale of misinformation problems in current systems
Explanation
An audience member argues that creating market incentives for more truthful AI systems depends on educating the public about existing problems. Only when users understand the scope of misinformation issues will they demand better accuracy from AI providers.
Major discussion point
Market and Consumer-Driven Solutions
Topics
Economic | Human rights | Sociocultural
Agreements
Agreement points
AI has fundamentally transformed information creation and distribution at unprecedented scale
Speakers
– David Caswell
– Chine Labbé
Arguments
AI has transformed information creation from artisanal to automated processes, fundamentally changing the information ecosystem
Malign actors increasingly use AI to generate sophisticated deepfakes, with Russian-Ukraine conflict showing 16-fold increase in deepfake quality and quantity
AI enables creation of entire networks of fake local news websites that appear credible but spread disinformation at unprecedented scale
Summary
Both speakers agree that AI represents a fundamental shift in how information is created and distributed, moving from manual/artisanal processes to automated systems that can operate at massive scale, though they focus on different aspects – Caswell on the general transformation and Labbé on malicious applications
Topics
Human rights | Sociocultural | Cybersecurity
Consumer awareness and market pressure can drive improvements in AI system reliability
Speakers
– Chine Labbé
– Maria Nordstrom
Arguments
Consumer awareness and demand for truthful AI systems can drive industry improvements in accuracy and safety features
Hard law regulation has limitations in requiring ‘truth’ from AI systems, making consumer empowerment and choice crucial
Summary
Both speakers recognize that while regulation has limitations, empowering consumers with knowledge and choice can create market incentives for AI companies to improve accuracy and reliability of their systems
Topics
Economic | Human rights | Legal and regulatory
Education and literacy are crucial for building resilience against AI-driven disinformation
Speakers
– Maria Nordstrom
– Olha Petriv
– Mikko Salo
Arguments
Finding balance between AI literacy education and maintaining public trust in information systems is a key policy challenge
AI literacy education must start early, focusing on critical thinking and algorithm understanding rather than banning AI use by children
Teachers need specific guidelines and support materials to effectively integrate AI literacy into education systems
Summary
All three speakers emphasize that education is fundamental to addressing AI disinformation challenges, though they highlight different aspects – the policy balance (Nordstrom), early childhood focus (Petriv), and teacher support needs (Salo)
Topics
Human rights | Sociocultural | Development
AI can be part of the solution when properly implemented with human oversight
Speakers
– David Caswell
– Chine Labbé
Arguments
AI can make civic information more accessible across different literacy levels, languages, and format preferences
AI can enable systematic rather than selective journalism coverage, processing vast amounts of digitally accessible information
AI tools can assist in monitoring disinformation and deploying fact-checks at scale when humans remain in the loop
AI may allow traditional journalism to return to on-the-ground reporting by automating routine information processing tasks
Summary
Both speakers acknowledge that despite the risks, AI offers significant opportunities to improve information systems – Caswell focuses on accessibility and systematic coverage, while Labbé emphasizes monitoring capabilities and freeing journalists for original reporting
Topics
Human rights | Sociocultural
Similar viewpoints
Both speakers recognize that current approaches to combating disinformation are inadequate and that AI exacerbates these problems by creating systemic issues rather than just individual false content pieces
Speakers
– David Caswell
– Chine Labbé
Arguments
Previous anti-disinformation efforts have been largely ineffective due to scale issues, perceived bias, and focus on individual artifacts rather than systems
AI creates vicious cycles where AI-generated false content gets validated by AI chatbots, creating self-reinforcing disinformation loops
Topics
Human rights | Sociocultural | Cybersecurity
Both speakers emphasize the critical importance of educational infrastructure and support systems for effectively teaching AI literacy, particularly focusing on practical implementation challenges
Speakers
– Olha Petriv
– Mikko Salo
Arguments
Educational initiatives should help children understand AI as a tool while developing skills to resist disinformation
Teachers need specific guidelines and support materials to effectively integrate AI literacy into education systems
Topics
Sociocultural | Development
Both speakers recognize the limitations of regulatory approaches alone and emphasize the importance of market-driven solutions through informed consumer choice and pressure
Speakers
– Chine Labbé
– Maria Nordstrom
Arguments
AI companies currently prioritize new features over safety, but user pressure could shift this balance toward reliability
Hard law regulation has limitations in requiring ‘truth’ from AI systems, making consumer empowerment and choice crucial
Topics
Economic | Legal and regulatory
Unexpected consensus
Self-regulation as viable interim solution
Speakers
– Olha Petriv
– Maria Nordstrom
Arguments
Self-regulation can serve as interim solution, with Ukrainian companies creating ethical AI codes of conduct while awaiting formal legislation
The Council of Europe’s AI Framework Convention provides first legally binding global treaty addressing AI’s impact on human rights, democracy and rule of law
Explanation
Despite representing different approaches (bottom-up self-regulation vs. top-down international treaty), both speakers see value in interim measures and voluntary compliance while formal legal frameworks develop. This suggests pragmatic consensus on multi-layered governance approaches
Topics
Legal and regulatory | Economic
AI chatbots as educational tools for AI literacy
Speakers
– David Caswell
– Olha Petriv
– Audience
Arguments
AI can make civic information more accessible across different literacy levels, languages, and format preferences
Educational initiatives should help children understand AI as a tool while developing skills to resist disinformation
Using AI chatbots to teach children about AI literacy represents an innovative educational approach
Explanation
There was unexpected enthusiasm across speakers for using AI itself as an educational tool to teach AI literacy. This meta-approach of using the technology to understand the technology represents innovative thinking that emerged during the discussion
Topics
Sociocultural | Development | Human rights
Overall assessment
Summary
The speakers demonstrated strong consensus on several key areas: the transformative scale of AI’s impact on information systems, the limitations of purely regulatory approaches, the critical importance of education and literacy, and the potential for AI to be part of the solution when properly implemented. There was also agreement on the need for multi-stakeholder approaches combining regulation, market incentives, and educational initiatives.
Consensus level
High level of consensus with complementary rather than conflicting perspectives. The speakers approached the topic from different angles (technical, policy, industry, civil society) but arrived at remarkably similar conclusions about both challenges and solutions. This suggests a mature understanding of the complexity of AI disinformation issues and the need for comprehensive, multi-faceted responses. The consensus has positive implications for developing coordinated international responses to AI disinformation challenges.
Differences
Different viewpoints
Effectiveness and bias of fact-checking approaches
Speakers
– David Caswell
– Mikko Salo
Arguments
Previous anti-disinformation efforts have been largely ineffective due to scale issues, perceived bias, and focus on individual artifacts rather than systems
Teachers need specific guidelines and support materials to effectively integrate AI literacy into education systems
Summary
Caswell argues that fact-checking has been ineffective and suffers from left-coding bias, citing Zuckerberg’s justification for ending fact-checking at META. Salo pushes back, suggesting that Zuckerberg’s position was politically motivated rather than evidence-based, and defends the work of fact-checkers who have written open letters explaining their case.
Topics
Human rights | Sociocultural | Legal and regulatory
Role of AI in replacing original source journalism
Speakers
– David Caswell
– Audience
Arguments
AI can enable systematic rather than selective journalism coverage, processing vast amounts of digitally accessible information
Using AI chatbots to teach children about AI literacy represents an innovative educational approach
Summary
An audience member questioned whether AI can truly replace journalism that requires original sources, particularly war journalism and personal stories that require human presence. Caswell acknowledged AI limitations but argued that much current journalism involves computer-based work that AI can handle, while the audience member emphasized the irreplaceable value of human-sourced reporting.
Topics
Human rights | Sociocultural
Unexpected differences
Trust versus skepticism balance in information literacy
Speakers
– Maria Nordstrom
– Irena GrÃkova
Arguments
Finding balance between AI literacy education and maintaining public trust in information systems is a key policy challenge
Three-pillar approach needed: fact-checking integration, human rights-by-design platform principles, and user empowerment strategies
Explanation
While both speakers work for institutions focused on protecting democratic values, they reveal a subtle but significant tension. Nordstrom worries that too much skepticism about information could undermine democracy itself, while GrÃkova suggests we may have entered an ‘era of mistrust’ that requires new approaches like public service AI chatbots. This disagreement is unexpected because it reveals philosophical differences about whether trust or skepticism should be the default stance in information literacy.
Topics
Human rights | Sociocultural
Overall assessment
Summary
The discussion revealed relatively low levels of fundamental disagreement among speakers, with most conflicts centered on implementation approaches rather than core goals. The main areas of disagreement involved the effectiveness of current fact-checking approaches, the extent to which AI can replace human journalism, and the balance between promoting healthy skepticism versus maintaining institutional trust.
Disagreement level
The disagreement level was moderate and constructive, with speakers generally building on each other’s points rather than opposing them. The most significant implication is that while there’s broad consensus on the problems AI poses for information integrity, there’s less agreement on solutions – particularly regarding the balance between technological fixes, regulatory approaches, and educational interventions. This suggests that policy development in this area will require careful coordination among different approaches rather than choosing a single strategy.
Takeaways
Key takeaways
AI has fundamentally transformed information creation from artisanal to automated processes, creating both unprecedented risks and opportunities for democratic dialogue
Current anti-disinformation efforts have been largely ineffective due to scale limitations, with AI now enabling malign actors to create sophisticated disinformation campaigns at unprecedented scale and low cost
AI chatbots authoritatively repeat false claims 26% of the time, creating vicious cycles where AI-generated disinformation gets validated by AI systems themselves
Children are particularly vulnerable to AI-generated disinformation and require early AI literacy education focused on critical thinking rather than AI prohibition
The Council of Europe’s AI Framework Convention provides the first legally binding global treaty addressing AI’s impact on human rights, democracy and rule of law
Consumer awareness and demand for truthful AI systems can drive industry improvements, as AI companies currently prioritize new features over safety and accuracy
AI offers opportunities for systematic rather than selective journalism coverage and can make civic information more accessible across different populations
Self-regulation by AI companies can serve as an interim solution while formal legislation is being developed
The fundamental challenge is preserving democratic institutions built on trust while navigating an era of widespread information mistrust
Resolutions and action items
Develop AI literacy educational materials and programs, including innovative approaches like using AI chatbots to teach children about AI
Create certification and labeling systems for credible information sources and AI systems to help users identify trustworthy content
Preserve and strengthen primary source journalism as the foundation for AI-based secondary journalism
Implement the Council of Europe’s three-pillar approach: fact-checking integration, human rights-by-design platform principles, and user empowerment strategies
Develop risk and impact assessment methodology for AI systems affecting democratic processes through the Council of Europe’s multi-stakeholder process
Raise consumer awareness about AI misinformation issues to drive market demand for more reliable AI systems
Support ratification and implementation of the Council of Europe’s AI Framework Convention
Invest in strengthening public service media and regulatory authorities’ capabilities to navigate the digital environment
Unresolved issues
How to find the right balance between AI literacy education and maintaining public trust in information systems without further eroding confidence
What is the optimal age and methodology for teaching children about AI and disinformation resistance
How to effectively regulate AI systems to require truthfulness when hard law has limitations in mandating ‘truth’
How to address the demand side of disinformation – making users seek and consume quality information even when available
How to scale fact-checking and content moderation to match the volume of AI-generated content
Whether AI can truly replace primary source journalism, particularly for on-ground reporting and original source gathering
How to prevent the fragmentation of shared narratives while enabling personalized AI-mediated information experiences
How to demonetize the disinformation economy and cut off financial incentives for spreading false information
Suggested compromises
Use AI as a tool to enhance rather than replace human journalism, allowing traditional media to focus on on-ground reporting while AI handles routine information processing
Implement hybrid approaches where humans remain in the loop for AI-assisted fact-checking and content moderation
Develop both hard law regulation (like the EU AI Act) and soft law measures (like industry self-regulation) to address different aspects of the AI disinformation challenge
Focus on empowering consumers to make informed choices about AI systems rather than attempting to regulate truth directly
Combine systematic AI-enabled information coverage with preservation of identity-based publications that serve community-building functions
Pursue both supply-side solutions (reducing harmful content production) and demand-side solutions (improving user resilience and critical thinking)
Thought provoking comments
We basically changed our information ecosystem from a one-to-many, or more accurately, a few-to-many shape, to a many-to-many shape… And this was the technical change that caused this cascade of activity over the last 15 years, including around disinformation and misinformation.
Speaker
David Caswell
Reason
This comment provides a fundamental framework for understanding the root cause of our current information crisis. Rather than focusing on symptoms, Caswell identifies the structural transformation that enabled mass disinformation – the democratization of mass communication itself.
Impact
This framing shifted the discussion from treating AI as the primary problem to understanding it as the latest evolution in a longer transformation. It established a historical context that influenced how other panelists discussed solutions, moving beyond reactive measures to systemic thinking.
I think there’s another deep risk that really is underappreciated here, which is that as we start to use these models as sort of core intelligence for our societies, that there are biases in these models… Elon Musk… has just recently announced that they’re going to use Grok to basically rebuild the archive on which they train the next version of Grok. So they’re going to write a new history, basically, of humanity, and then use that to train Grok.
Speaker
David Caswell
Reason
This insight reveals a terrifying feedback loop where AI systems don’t just reflect existing biases but actively reshape the information foundation of society. The Grok example illustrates how powerful actors can literally ‘rewrite history’ at the training data level.
Impact
This comment introduced a new dimension of concern that went beyond traditional content moderation discussions. It elevated the conversation to existential questions about truth and reality, influencing later discussions about the need for systematic approaches and public oversight.
So you have information created by AI, generated, then repeated through those websites and validated by the AI chatbots, the really vicious circle of disinformation.
Speaker
Chine Labbé
Reason
This identifies a critical self-reinforcing mechanism where AI-generated false information becomes ‘validated’ by other AI systems, creating an ecosystem of synthetic credibility that’s increasingly difficult to detect or counter.
Impact
This observation shifted the discussion from viewing AI as a tool that could be controlled to understanding it as creating autonomous disinformation ecosystems. It reinforced the urgency around developing systematic solutions rather than piecemeal approaches.
The important question here is, to what extent is it beneficial for the society when all information is questioned? What does it do with democracy and what does it do with our agency when we can no longer trust the information that we see, that we read, that we hear?
Speaker
Maria Nordstrom
Reason
This comment captures a fundamental paradox: efforts to combat disinformation through skepticism and education may inadvertently erode the shared trust that democracy requires. It highlights the delicate balance between critical thinking and social cohesion.
Impact
This shifted the conversation from purely technical solutions to philosophical questions about the foundations of democratic society. It influenced the moderator’s later observation about entering an ‘era of mistrust’ and shaped discussions about preserving trusted institutions.
And it’s not just a parental issue, it’s a generation’s lost potential… if we will not teach children how to understand news and understand AI, somebody else will teach them how to think.
Speaker
Olha Petriv
Reason
This reframes AI literacy education as an urgent societal imperative rather than an individual responsibility. The phrase ‘somebody else will teach them how to think’ powerfully captures the stakes of inaction in a world where malicious actors are actively exploiting AI.
Impact
This comment elevated the discussion of education from a nice-to-have to an existential necessity. It influenced other speakers to focus on practical implementation of AI literacy programs and sparked innovative ideas like using AI chatbots to teach AI literacy.
I think there’s a new deep need… we’ve depended for 400 years on these mechanisms, like the scientific method or like journalism… they’re truth-seeking and they’re also self-correcting… I think they also need to be deterministic rather than probabilistic.
Speaker
David Caswell
Reason
This insight identifies a fundamental incompatibility between how democratic institutions have historically operated (deterministic, verifiable, persistent) and how AI systems work (probabilistic, opaque, ephemeral). It suggests our entire epistemological framework may need updating.
Impact
This comment introduced a deeper philosophical dimension to the technical discussion, influencing conversations about the need for new institutional frameworks and the importance of preserving traditional journalistic methods alongside AI innovation.
Overall assessment
These key comments transformed what could have been a typical ‘AI is dangerous/helpful’ discussion into a sophisticated analysis of systemic challenges to democratic epistemology. Caswell’s historical framing established that we’re dealing with the latest phase of a longer transformation, while Labbé’s practical examples grounded abstract concerns in measurable realities. Nordstrom’s philosophical questioning and Petriv’s urgency about education elevated the stakes from technical problems to civilizational challenges. Together, these insights shifted the conversation from reactive problem-solving to proactive system design, emphasizing the need for new institutional frameworks, educational approaches, and governance mechanisms that can preserve democratic dialogue in an AI-mediated information ecosystem. The discussion evolved from cataloging problems to envisioning solutions that address root causes rather than symptoms.
Follow-up questions
How do you fact check your way out of a completely new alternative reality created by AI?
Speaker
Irena Gríkova
Explanation
This addresses the fundamental challenge of verifying information when AI can create entire plausible but false narratives at scale, making traditional fact-checking approaches insufficient
What is the right age to teach children about AI, and how should this education be structured?
Speaker
Mikko Salo
Explanation
This is crucial for developing AI literacy programs, as children need to understand critical thinking before using AI as a supportive tool, and the approach may be culturally dependent
To what extent is it beneficial for society when all information is questioned? What does it do with democracy and our agency when we can no longer trust the information we see?
Speaker
Maria Nordstrom
Explanation
This addresses the balance between healthy skepticism and the erosion of trust that could undermine democratic institutions and individual agency
How can we incentivize AI service providers to be more grounded on truth and reality?
Speaker
Jun Baek
Explanation
This explores market-based and regulatory approaches to encourage AI companies to prioritize accuracy over other features like efficiency or novelty
How do you motivate AI companies to comply with self-regulation?
Speaker
Irena Gríkova
Explanation
This examines the mechanisms needed to ensure voluntary compliance with ethical AI standards in the absence of binding regulations
What does ‘systematic versus selective’ news coverage mean in the context of AI journalism?
Speaker
Irena Gríkova
Explanation
This seeks clarification on how AI could transform journalism from resource-constrained selective reporting to comprehensive systematic coverage of information domains
Can AI really replace original source journalism, especially in areas requiring human presence like war journalism or personal stories?
Speaker
Frances (YouthDIG)
Explanation
This questions the limits of AI in journalism and the continued need for human reporters in certain contexts that require physical presence and human connection
How can we develop certification or ranking systems for AI chatbots to help users identify trustworthy sources?
Speaker
Irena Gríkova
Explanation
This explores the need for user-friendly systems to evaluate and compare the reliability of different AI information sources
Should we develop public service AI chatbots trained only on reliable data?
Speaker
Irena Gríkova
Explanation
This considers whether governments should provide trustworthy AI information services as a public good, similar to public service media
How can we preserve and strengthen primary source journalism as the foundation for AI-based secondary journalism?
Speaker
Irena Gríkova
Explanation
This addresses the need to maintain human-generated original reporting to prevent the information ecosystem from becoming entirely virtual and self-referential
Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.
Day 0 Event #270 Everything in the Cloud How to Remain Digital Autonomous
Session at a glance
Summary
This discussion at the Internet Governance Forum focused on digital autonomy in the context of cloud computing concentration among major global providers. The panel examined concerns about strategic dependencies on a handful of predominantly US-based cloud services like AWS, Microsoft Azure, and Google Cloud, and their implications for national security, data protection, and digital resilience.
Anke Sikkema from the Netherlands Ministry of Economic Affairs outlined her country’s Digital Open Strategic Autonomy (DOSA) agenda, which aims to remain globally open while addressing strategic dependencies in the digital sector. She described recent debates in the Netherlands, including concerns when their domain registry provider wanted to move to Amazon Web Services, leading to parliamentary initiatives and revised government cloud policies. The Dutch approach emphasizes being “open where possible and protective when necessary.”
Jeff Bullwinkel from Microsoft acknowledged the legitimacy of sovereignty concerns while emphasizing that trust is fundamental to technology adoption. He outlined Microsoft’s European digital commitments, including doubling infrastructure capacity in Europe by 2027, implementing an EU data boundary, and committing to resist government orders to seize or suspend cloud services. He stressed the importance of focusing on innovation opportunities at the model and application layers, not just infrastructure.
Agustina Brizio provided a Global South perspective, highlighting how Latin American countries face particular challenges with hyperscaler dependency and limited regulatory capacity. She advocated for multi-cloud architectures, investment in local cloud capabilities, and stronger public procurement policies that include data localization and transparency requirements. She emphasized the need for more democratic governance mechanisms for cloud infrastructure, viewing it as a public good requiring social oversight.
The discussion concluded that addressing cloud concentration requires balanced approaches combining protection, promotion, and partnership among governments, industry, and civil society stakeholders.
Keypoints
## Major Discussion Points:
– **Cloud Market Concentration and Digital Autonomy Concerns**: The discussion centered on how the dominance of a few major cloud providers (primarily US-based like AWS, Microsoft Azure, Google Cloud) creates strategic dependencies that raise concerns about national security, data protection, and digital sovereignty for countries worldwide.
– **Regional Approaches to Cloud Sovereignty**: Different regions are taking varied approaches – Europe is developing initiatives like GAIA-X and sovereign cloud policies, while Latin America faces greater challenges due to power imbalances and regulatory gaps when dealing with hyperscale cloud providers.
– **Trust, Transparency and Accountability in Cloud Services**: The conversation emphasized that trust is fundamental to cloud adoption, with discussions about how cloud providers can maintain trust through transparency, accountability measures, and compliance with local laws and regulations.
– **Multi-stakeholder Governance and Shared Responsibility**: Speakers highlighted the need for collaborative approaches involving governments, private sector, civil society, and academia to address cloud governance challenges, rather than relying solely on government regulation or corporate self-regulation.
– **Balancing Innovation Benefits with Sovereignty Concerns**: The discussion explored how to maintain the significant benefits of cloud computing (efficiency, scalability, innovation) while addressing legitimate concerns about data control, security, and national digital autonomy.
## Overall Purpose:
The discussion aimed to explore the complex challenges of digital autonomy in an era dominated by concentrated cloud computing markets. The goal was to examine how different stakeholders can work together to manage strategic, regulatory, and operational risks while preserving the benefits of cloud services and fostering more diverse, secure, and locally accountable cloud ecosystems.
## Overall Tone:
The discussion maintained a balanced and constructive tone throughout. It began with acknowledgment of legitimate concerns about cloud concentration, evolved into a nuanced exploration of different regional perspectives and approaches, and concluded with a collaborative spirit emphasizing shared responsibility. The speakers were respectful of different viewpoints, with government and industry representatives acknowledging each other’s concerns while civil society voices provided critical but constructive perspectives from the Global South. The tone remained professional and solution-oriented rather than adversarial.
Speakers
– **Jenna Fung**: Program Director of NetMission.Asia, a youth-focused network in Asia Pacific dedicated to engaging and empowering young people in internet governance. Served as on-site moderator for the session.
– **Anke Sikkema**: Deputy Director, Digital Economy, Netherlands Ministry of Economic Affairs. Studied international relations at the University of Groningen in the Netherlands and started her career in Brussels as European Policy Advisor at VNO-NCW.
– **Jeff Bullwinkel**: Vice President and Deputy General Counsel, Corporate, External and Legal Affairs of Microsoft. Focuses on the company’s legal and corporate affairs across Europe, the Middle East and Africa.
– **Agustina Brizio**: Lawyer with a master’s degree in public policy, currently pursuing an MPA in digital technologies and policy at University College London. Innovations and digital technologies manager at Asuntos de Sol, where she leads efforts to strengthen democracy through inclusive tech policies in global south. Focuses on intersections of technologies, equity, and governance with particular interests in AI, cybersecurity, and digital rights.
– **Corinne Katt**: Head of Team Digital at the human rights NGO Article 19, and a recovering postdoc who has written on the political economy of the cloud.
Additional speakers:
None identified beyond the listed speakers.
Full session report
# Digital Autonomy in the Cloud: Navigating Sovereignty and Innovation in an Era of Market Concentration
## Executive Summary
This discussion at the Internet Governance Forum’s Day Zero Event 270 examined the complex challenges surrounding digital autonomy in the context of concentrated cloud computing markets. The panel brought together diverse perspectives from government policy makers, industry representatives, civil society advocates, and Global South voices to explore how the dominance of a handful of predominantly US-based cloud providers affects national sovereignty, democratic governance, and innovation opportunities worldwide.
The conversation explored practical approaches that balance multiple competing values: sovereignty versus openness, security versus innovation, and local control versus global efficiency. All participants acknowledged the legitimacy of concerns about strategic dependencies on major cloud providers like AWS, Microsoft Azure, and Google Cloud, whilst recognising the substantial benefits these services provide for economic efficiency, innovation, and cybersecurity.
## Key Participants and Perspectives
The discussion was moderated by **Jenna Fung**, Program Director of NetMission.Asia, who noted this was her “first time moderating an open forum in this setting.” She opened by explaining key concepts including data residency (where data is physically stored), data sovereignty (legal control over data), and digital autonomy (ability to make independent decisions about digital infrastructure).
**Anke Sikkema**, Deputy Director of Digital Economy at the Netherlands Ministry of Economic Affairs, presented the European governmental perspective through her country’s Digital Open Strategic Autonomy (DOSA) agenda. Her approach emphasised being “open to the outside world where possible and protective when necessary.”
**Jeff Bullwinkel**, Vice President and Deputy General Counsel at Microsoft, provided the industry perspective whilst acknowledging the legitimacy of sovereignty concerns. He emphasised trust as fundamental to technology adoption, citing “a famous Dutch statesman in the 18th century” who said that “trust arrives on foot and leaves on horseback.”
**Agustina Brizio**, representing Global South perspectives as an innovations and digital technologies manager at Asuntos de Sol, highlighted the particular challenges faced by Latin American countries in dealing with hyperscale cloud providers, emphasising power imbalances and the need for more democratic governance mechanisms.
**Corinne Katt** from Article 19 contributed perspectives on the intersection of cloud computing with human rights and democratic oversight, particularly regarding concerns about government access to data.
## The Netherlands’ Digital Open Strategic Autonomy Approach
Sikkema outlined the Netherlands’ comprehensive approach to addressing cloud dependencies through their DOSA agenda, which emerged from growing concerns about strategic dependencies in the digital sector. A key catalyst was when SIDN, the .nl domain provider, wanted to migrate to Amazon Web Services, sparking national parliamentary debate about digital sovereignty.
This led to revised government cloud policies and the “Clouds on the Horizon” bill, drafted by Dutch parliament members with multi-stakeholder input. The Dutch approach centres on three key governmental roles: protecting through legislation, promoting innovation, and fostering partnerships.
Sikkema highlighted European Union initiatives including the Data Act and Digital Markets Act as regulatory tools that help mitigate risks whilst promoting investment. She also discussed collaborative European projects like Gaia-X and Eurostack, which aim to build collaborative digital infrastructure that provides alternatives to complete dependency on global hyperscalers whilst maintaining interoperability.
## Industry Perspective: Trust, Security, and Innovation
Bullwinkel acknowledged the legitimacy of sovereignty concerns whilst emphasising that trust forms the foundation of all technology adoption decisions. He announced Microsoft’s new sovereign cloud approach, revealed “just last week” in Amsterdam by CEO Satya Nadella, including plans to increase European capacity by 40% and double capacity between 2023-2027, implement an EU data boundary, and commit to resisting government orders to seize or suspend cloud services.
He outlined Microsoft’s security capabilities, noting that the company processes “77 trillion different signals” daily for cybersecurity and can invest in security at levels that exceed individual government capabilities. He used the example of Ukraine, which suspended its law requiring government data to be stored within borders during the conflict, achieving better data sovereignty by dispersing digital assets across Europe when Russian missiles targeted local data centres.
Bullwinkel also highlighted successful European companies like Mistral AI and Hugging Face as examples of innovation happening at the model and application layers using hyperscale infrastructure. He mentioned Microsoft’s partnerships with European companies including Leonardo in Italy, Proximus in Belgium, and Telefonica in Spain.
## Global South Challenges and Democratic Governance
Brizio provided crucial perspectives from Latin American countries, highlighting how Global South nations face particular challenges with hyperscaler dependency due to limited regulatory capacity and significant power imbalances. She cited Argentina’s experience with Arsat, a telecoms provider with data center capabilities, as an example of regional approaches to building local capacity.
Her arguments focused on the democratic governance implications of cloud concentration, noting that vendor lock-in prevents governments from making autonomous decisions and undermines democratic oversight. She advocated for multi-cloud architectures as a strategy to prevent vendor lock-in and maintain governmental decision-making autonomy.
Brizio called for stronger public procurement policies that include data localisation and transparency requirements, viewing these as powerful tools for governments to influence cloud provider behaviour. She emphasised the need for investment in national and regional cloud capabilities to support local innovation ecosystems.
Her most significant contribution was reframing cloud infrastructure as a public good requiring social oversight, calling for more democratic governance mechanisms that move beyond technical expert discussions to include broader societal stakeholders.
## Trust and Transparency Challenges
A recurring theme throughout the discussion was the fundamental challenge of building and maintaining trust in cloud services. Katt raised specific concerns about the US Cloud Act and its implications for European data protection, questioning whether company commitments to resist government data requests can overcome fundamental legal framework issues.
The discussion revealed that trust-building requires ongoing attention and cannot be easily resolved through technical measures alone. Different speakers proposed different approaches: corporate commitments and technical solutions versus stronger regulatory frameworks and democratic oversight mechanisms.
## Innovation and Economic Opportunities
Despite sovereignty concerns, all speakers recognised the significant innovation and economic opportunities created by cloud computing, particularly in the era of artificial intelligence. Bullwinkel emphasised that innovation opportunities exist across infrastructure, model, and application layers, arguing that European companies are successfully building innovative services on hyperscale infrastructure.
Other speakers argued that some level of infrastructure independence is necessary to support local innovation ecosystems and maintain decision-making autonomy. This tension between leveraging global capabilities and building local capacities highlighted the complexity of balancing innovation with sovereignty concerns.
## Regional Approaches and Collaboration
The discussion revealed different regional approaches to addressing cloud sovereignty challenges. European initiatives like Gaia-X and Eurostack represent collaborative approaches to building alternative infrastructure whilst maintaining interoperability with global services.
The Global South faces different challenges, with limited resources for building independent infrastructure but greater vulnerability to power imbalances with hyperscale providers. Brizio’s emphasis on hybrid public-private models and regional cooperation suggested potential pathways for developing countries to achieve some autonomy whilst leveraging global capabilities.
Bullwinkel mentioned his recent visits to African countries including Kenya, Egypt, Nigeria, Tanzania, and Rwanda, noting the diverse approaches being taken across different regions.
## Practical Solutions and Recommendations
Several practical solutions emerged from the discussion. Multi-cloud architectures were widely supported as a strategy to prevent vendor lock-in and maintain decision-making flexibility. Public procurement policies were identified as powerful tools for incorporating sovereignty requirements into government cloud adoption.
Investment in national and regional cloud capacities was seen as important for supporting local innovation ecosystems, though speakers acknowledged the challenges of building sustainable technical capacity and stable policy frameworks, particularly in developing countries.
The discussion also highlighted the importance of open and interoperable standards as alternatives to complete technological independence, allowing organisations to maintain flexibility whilst benefiting from global innovation.
## Multi-Stakeholder Governance
The conversation demonstrated the importance of multi-stakeholder approaches to cloud governance. All speakers agreed on the need for collaboration between governments, industry, and civil society, though they differed on implementation approaches.
The discussion highlighted that effective multi-stakeholder governance requires creating structures that provide meaningful participation and decision-making power to all relevant parties, including civil society and affected communities, rather than simply gathering stakeholders around a table.
## Conclusion
The discussion concluded with recognition that addressing cloud concentration requires balanced approaches combining protection, promotion, and partnership amongst governments, industry, and civil society stakeholders. The conversation moved beyond binary thinking about dependence versus independence to explore nuanced frameworks for managing trade-offs contextually.
As Fung noted in closing, digital autonomy can be “a slippery slope sometimes,” requiring careful navigation of competing interests and values. The speakers agreed that the conversation about digital autonomy should continue in national and regional contexts, with each stakeholder group taking specific steps to achieve meaningful digital autonomy whilst preserving the benefits of cloud computing for innovation, efficiency, and security.
The conversation ultimately reframed cloud governance from a purely technical or economic issue to a fundamental question about how societies should govern critical digital infrastructure in an interconnected world, emphasising the need for democratic participation and oversight in decisions that affect everyone’s digital future.
Session transcript
Jenna Fung: Testing. Can you guys hear me? Awesome, everyone. Hello. Welcome to Day Zero Event 270, Everything in the Cloud, How to Remain Digital Autonomous. I’m your on-site moderator today. My name is Jenna Fung, Program Director of NetMission.Asia, a youth-focused network in Asia Pacific dedicated to engaging and empowering young people in internet governance. It is my first time moderating an open forum in this setting with the headphones on, and it’s kind of weird to hear your own voice when you moderate too. So please bear with me if I make any mistake, but welcome. I’m glad you guys decided to join today’s sessions, what I’m sure will be a dynamic and important conversation because this is a very interesting topic that interested me these days. And I believe it’s the same to all of you because you decided to attend this meeting and be with us today out of all the sessions. As we know, cloud computing is now the backbone of today’s digital economy. We encounter it without even noticing it, from backing up your photos from our phones, using smart home devices, or video doorbell, or even use tools like Google Docs or Zoom calls to prepare for your very IGF workshop coordination too. So we encounter cloud services these days even without realizing it, but as more and more of our infrastructure services and data move to the cloud, we are also starting to ask tougher questions, especially… about who controls these systems and what that means for national security, data protections, or even long-term digital resiliency. So right now, much of the world relies heavily on a handful of major cloud providers, mostly US-based as well, like AWS, Microsoft Azure, and Google Cloud. Of course, a lot of amazing work has been done. But recently also Europe and some other regions starting to have their different concerns about their strategic dependency on cloud usage. 
And in Asia, we are seeing more mixed pictures with some countries also turning to regional providers like Alibaba or Tencent, which are mostly Chinese companies. But only a few countries managed to build a robust, strong domestic cloud ecosystem to encounter the emerging and ever-changing environment that we are in today. So now to set the stage for today’s discussions, I don’t know if you guys get a chance to read the titles again and again, because to me, it is very complicated. And it’s a concept that we have to unpack before we can construct a conversation that’s meaningful for us, where we can bring this topic back to our own country for further discussion. So I would like to briefly surface a few concepts before I start introducing our speakers today, which will help us to get through all those questions we have. Terms like data residency, we might have heard before, where people talked about data stored or processed physically within certain national borders. And later on, people also talked about data sovereignty, which elevates the concept into the legal realm, signifying that data is not only physically within a country, but also subject to certain countries’ laws and regulations. And ultimately, the aspiration in this domain is our topic today, digital autonomy. And so, we want to explore this bigger picture together. What does this concentration of cloud power in a certain handful of providers of cloud services mean for digital autonomy, and what role should government play in ensuring security and compliance, especially when it comes to sensitive data, whether it’s personal or national. So, should countries be investing in sovereign cloud initiatives? These questions I want to plant in your head as we start this conversation later on. We’ll also have an open floor Q&A to the audience. But now, allow me to introduce our speakers. So, to my right is Anke, Deputy Director, Digital Economy, Netherlands Ministry of Economic Affairs. 
She studied international relations at the University of Groningen in the Netherlands and she started her career in Brussels as European Policy Advisor at VNO-NCW. And later on, returned to The Hague, where she is a civil servant at the Ministry of Economic Affairs. Now, of course, works as Deputy Director, Digital Economy, Netherlands Ministry of Economic Affairs. Welcome, Anke. And to my right is Jeff from Microsoft. Jeff is the Vice President and Deputy General Counsel, Corporate, External and Legal Affairs of Microsoft. He’s been actually working in many different regions before, but now mainly focusing on the company’s legal and corporate affairs across Europe, the Middle East and Africa. So, later on, we’ll hear from Jeff, who might happen to be able to share more insights from, you know, a broader regional perspective. And online we have Agustina. Just checking with our online moderators if Agustina is with us. Awesome. Agustina is a lawyer with a master’s degree in public policy, currently pursuing an MPA in digital technologies and policy at University College London. She is innovations and digital technologies manager at Asuntos de Sol, where she leads efforts to strengthen democracy through inclusive tech policies in the global south. And she recently did some research focusing on the intersections of technologies, equity, and governance with particular interests in AI, cybersecurity, and digital rights. So we get a really good panel today and I want to pass it over to our speakers because I’ve been speaking a lot already. We have plenty of policy questions, many of them very compact, and I believe you can access them on the IGF website as well. First question actually to start with: to what extent is the current concentration of the cloud computing market among global providers a problem for digital autonomy, security, and/or innovation? 
Very compact questions, but I am aware that in Europe, for instance, the Netherlands has started some initiatives around a more sovereign cloud. So perhaps I may now pass it over to Anke to share some of the experience you have working in this area, and then we can unpack and try to answer that question together.
Anke Sikkema: Okay, thank you very much. I think I’ll take this off. Yes, easier. Yes, thank you very much. It’s a good opportunity to be here and discuss these questions with you at the IGF in this multi-stakeholder setting, because there’s a lot to say, I don’t have all the answers, and that’s why we’re here: to discuss this together from different angles. What I can do in response to the first question is say a bit more about the situation in the Netherlands. Before I start, and because we will mainly talk about concerns, I’d like to stress the benefits of cloud for the economy and for society as a whole. It’s an efficient way to store data, it’s user-friendly, and we can’t think of a world without it. However, we have had some quite extensive debates lately about strategic autonomy in general and, more specifically, about dependencies in the cloud market back home. I would like to take you back to 2023, when our government published a so-called DOSA agenda. DOSA is an acronym for Digital Open Strategic Autonomy; we had been thinking about this for a longer time. The word open is deliberately part of this agenda, because we think it’s important to act globally and as openly as possible. On the other hand, the agenda aims to address strategic dependencies in the digital sector, and cloud is one of its priorities. So the idea is to see where these dependencies are and in which technologies. What happened is that research agencies in the Netherlands indicated that the market position of European players in the cloud market is relatively weak; I don’t think that’s a surprise to anyone anymore. Another thing is that the use of cloud entails certain risks when it comes to maintaining control of and access to sensitive and protected data. That is extra important when we talk about government data. And what we see, a bit broader than only the Netherlands, is at the EU level.
Of course, we are part of the EU, and the EU has taken important legislative steps to protect and to mitigate the risks. You can think of the European Data Act, the Data Governance Act and the Digital Markets Act. On the other hand, it has also started to promote and invest in projects in cloud and data infrastructure. So both of these things happened. Then, in 2024, we had two interesting developments, and I speak here a bit from the government perspective and from the political world I work in. First of all, our country-code top-level domain provider, SIDN, which runs .nl, announced that it wanted to move its domain registration system to Amazon Web Services, because its current system was becoming outdated and needed renewal. This started a wide national debate, because people realized that even technically advanced organizations such as SIDN were not able to run certain systems with a European cloud provider. Our minister was asked to investigate this case, and that’s what we did. Secondly, two members of the Dutch parliament presented a draft bill with the catchy title Clouds on the Horizon. They expressed their concern regarding the dependency of Dutch society and the Dutch government on big U.S. cloud providers. This bill was drafted, and I think it’s good to mention here as well, in a kind of multi-stakeholder way, because it was a group of stakeholders who drafted it together with those members of parliament: people from the technical community, civil society, academia, and the business community. And we see that the change in the geopolitical landscape has led to a revised cloud policy, specifically when it comes to the use of government data in the cloud. So what we would like to say, coming back to the DOSA agenda I mentioned, Digital Open Strategic Autonomy, is: open to the outside world where possible, and protective when necessary.
So that’s how we look at it right now. Thank you.
Jenna Fung: Thank you, Anke, for sharing. Better to take it off. Thank you, Anke, for sharing the perspective of the Netherlands government. Now I would like to turn the same questions over to Jeff, slightly differently, because, like I said, these are really compact questions. And, as Anke mentioned earlier, much of the time the concern and risk is about how to keep control of data. I saw you took a lot of notes, and you probably have some insights, but I want to add a little context on top of the questions. These days, especially with AI proliferating and growing every single day without us being able to predict exactly where it is going, many see it as amplifying some global concerns, especially around this topic. I wonder if you could speak to the roles and responsibilities of a big cloud service provider like Microsoft in this situation, especially since not only businesses but also some governments rely on your systems; one example is the Canadian government using your services. So I would like to turn it over to you.
Jeff Bullwinkel: Well, thank you, Jenna, for the introduction and for the chance to be here. Thanks to the people in the room and those joining online for what I think is a very timely and very important conversation, and I’m grateful to be sharing the platform here with Anke, and with Agustina online. I’ll say a few things in response to that question, Jenna, perhaps building a bit on what Anke said as well. The first is to acknowledge very clearly that the concerns we are hearing about today are very natural, understandable and, frankly, appropriate, and I think it’s important to state that very directly. In many respects, the concerns we hear about data privacy, data security, and data and digital sovereignty are not new; they have been with us for a long time. We can think back more than 10 years now to when Snowden first fled the U.S. with four laptops and went initially to Hong Kong and then on to Russia. At that time, trust, and frankly speaking trust in technology, became an issue, and I think we’re seeing some of those concerns around trust surface again in a very clear way. So while the concerns around sovereignty, autonomy and the like are perhaps not new, they are, I think, more pronounced and more frequent in terms of how they come up in our own conversations, given what is, after all, a relatively volatile geopolitical environment. And so that certainly does cause us as a company, as a major provider of services, to think hard about what we need to do to maintain trust. And trust really is the critical element here. And I had the privilege, by the way, Anke, of living in your great country for five years.
And I learned at that time that a famous Dutch statesman in the 18th century once said that trust arrives on foot and leaves on horseback, which I think was absolutely true when it was said and is certainly true now. So we think about the fact that people will simply not use technology they don’t trust, and that guides everything we do. As a company, we have a high degree of responsibility to think about these issues in the right way. In relation to AI, as you say, Jenna, this is the era of AI after all. It is here. It’s not new either, by the way, but it is certainly having a profound impact on every aspect of society in really very exciting ways. But again, the trust issue looms large. It was eight years ago, in fact, that Microsoft as a company decided to articulate a set of responsible AI principles, designed at that time to govern our own conduct and to make sure that as we develop and deploy AI systems, we do so in a way that reflects privacy and security, safety and reliability, fairness, inclusiveness, and ultimately transparency and accountability. But equally, we’re just one company in one sector, and we don’t make the rules. So while we have our own approach to what I might describe as self-regulation, ultimately it’s up to governments to make the rules and for companies like ours to follow them. In light of the most recent dynamics we’re seeing geopolitically, we’ve taken some additional steps as well that perhaps get to some of the questions Anke so effectively articulated. This is a global audience in the room and, to be sure, online. But since we’re here in Europe, I would emphasize, as a reference point, that about a month and a half ago we articulated a set of European digital commitments that are very much designed to get to the issues that I think are top of mind.
One element of that is making sure we have a cloud and AI ecosystem that is broad and diverse. We have our own infrastructure investments across Europe: quite a lot of investment in the Netherlands, for sure, but elsewhere across Europe as well. In fact, we’ve committed to increase our capacity in Europe over the next couple of years by 40%, which means that over the five-year period between 2023 and 2027 we will have doubled our infrastructure capacity for hyperscale cloud and AI services across Europe, with over 200 data centers in 16 different countries. These investments reflect, for us, an appreciation of the interdependence of what we are doing together with our European partners and allies. These data centers are not built on wheels. They are governed by the laws of the European countries in which we operate, just as our infrastructure elsewhere in the world is governed by local laws, and we’re not at all confused about that. Another element of our new European digital commitments is the importance of maintaining digital resiliency in this era of geopolitical volatility. For instance, we’ve committed in this context to push back, including with litigation, in the event of any order from any government anywhere in the world for us to cease or suspend cloud services, something we think is especially important in today’s environment. A third element of our European digital commitments is focused on making sure we’re doing everything we can to continue protecting and defending the privacy of European data, and in that connection we have invested over some years now in an EU data boundary for the Microsoft Cloud that allows us to commit to our customers that their data is stored and processed within the European Union, and EFTA countries as well.
And we’ve also taken additional steps in relation to sovereign controls, to make sure we are being responsive to the sorts of things we’re hearing from customers, from partners and from government stakeholders as well. So we’re doing various things in this area that we hope will be responsive to the concerns that are out there. But I’d maybe conclude by echoing Anke’s comment that while it is natural and appropriate to focus on the challenges and the problems, the opportunity is also immense in this era of AI, and I hope we can talk a bit more about that in this conversation as well, because what is happening is pretty exciting.
Jenna Fung: Awesome. Thank you so much, Jeff. So far we can hear some key words coming up in the context of the current concentration of the cloud computing market: there are legitimate concerns around control of data, but at the same time around trust in these kinds of services and in the service providers themselves. Before I turn it over to Agustina, I also want to bring up the second question, because of Agustina’s background, so we can slowly ease into that part as well. We have talked a lot about dependencies on the dominant cloud providers, and building on the topics of trust and the legitimate concerns around control of sensitive data, I wonder if Agustina can shed some light on what the situation is like in the Global South, where it’s probable that some regions are using different kinds of service providers, not necessarily just U.S.-based ones; perhaps Agustina can speak more to that. My transition here is that earlier Jeff mentioned transparency and accountability, and that is something that comes up a lot when different stakeholders raise this very topic. That leads us to question two: what shared approaches could governments, the private sector and civil society take to manage strategic, regulatory and operational risks like some of the ones we just mentioned, and to hold service providers accountable, today mostly the very prominent U.S.-based private cloud companies, though there’s a chance that different kinds of organizations or initiatives will emerge.
With this kind of shared approach, we would want to make sure we are not losing the benefits these cloud services provide, because with the structure we have nowadays we are essentially enjoying the cloud’s efficiency, and scaling globally is also something we’re enjoying. But if we want to hold providers accountable, what are some shared approaches that different stakeholders could take? So now I would like to turn it over to Agustina. I’m sorry if I made it overly complicated by putting the two questions together, but I hope you can give us some context from the perspective of the Global South, which will be very interesting to our audience today. Agustina, over to you.
Agustina Brizio: Yeah, thank you so much, Jenna. I will try to cover everything. I’ll start with some key points that are basically common threads. Some were already covered by Anke and by Jeff, and I think they highlight the main discussions we have around service providers. When we think about cloud services in general, there’s a global concern about what’s happening with this key infrastructure on which we are basing our digital ecosystem: we have a few dominant providers, mainly U.S.-based companies, that are at some point basically defining and shaping what the digital landscape is going to look like, with some interventions from states on policy and some specific aspects. But we see, especially from the Global South, that many of these laws face a lot of challenges, mainly because of the transnational layer of both the cloud and the Internet, and also because of the power imbalances these companies usually have when they face governments. So for cloud services specifically, and since both Jeff and Anke talked about how relevant and how key they are to thinking about the future of technology, this problematization is not a matter of market share or how it should be distributed among providers. It is basically about who is taking decisions on the digital foundations of our societies. Here in Latin America, in general, we have a big dependency on these hyperscalers. We mainly use U.S. providers, and not only for the administration platforms we use in government: it’s also where we host data and where a lot of our public policies are based. And we cannot leave out of the picture the fact that many of the hyperscalers don’t even have a presence within the region. That adds a more complex layer to how we can regulate, or in some way actively exercise, the concept of sovereignty translated into the digital world.
And since many of these companies basically rely on United States frameworks, because they are legally constituted there, this also poses another layer of problems for the democratic oversight that society can exercise through governments over these territories. So this, at least in Latin America, is not only eroding our democratic construction; it also diminishes the ability of governments to respond to crises, safeguard data, or even design their own independent digital policies. We’ve seen, especially in Europe, several approaches led through policy, like the GDPR, the AI Act and competition law. There has been a lot of action from the state towards working in collaboration with these companies, or trying to put a digital rights framework around the development of these technologies. But still, when we think about it from the Global South, there’s a really big imbalance and a massive regulatory gap in how to work with these hyperscalers, because we want to have cloud services, we want to be able to develop better technological landscapes, but there isn’t a concrete tool that can be used to reclaim some kind of control here so we can foster progress. I’m not going to go in depth into the problematic aspects; I think many in our audience and the people present here are quite aware of the problem with these big tech companies in general and how they affect not only markets but also sovereign political decisions by governments. But when we think about what to do about it, which is probably what we should all be trying to address more than the problems, there are different actions that can actually be carried out to balance this power imbalance and this situation of inequality.
Those actions, of course, go far beyond technical fixes, because they require governments, and every stakeholder actually, to rethink the concept of sovereignty. When we usually talk about sovereign cloud initiatives or sovereign technologies, our mind instantly goes to: okay, we need to have things within our territory, developed by us. And we have seen that this is not sustainable at a large scale; it requires too much investment and a lot of human talent that is not currently available, at least in Global South countries. So we are required to recognize and think of the cloud as a strategic asset within the state. It’s not just something provided by a company so the government can deliver some services in a digital way. So we require governments to think about different strategies to address this ultra-concentrated market. For instance, taking some of our experience in Argentina over the past years, there is always the option to adopt a multi-cloud architecture, so that public services are not all laid on one provider, which generates a very critical situation. When you think about different providers, not only the big ones but also prioritizing local providers, you are basically thinking about more open standards, guaranteeing portability and interoperability through a regulatory framework. This is at least one specific way in which governments can still use the cloud, still seizing all the opportunities and the power the cloud has in terms of innovation and development, while avoiding vendor lock-in, which is probably the most critical situation, and not just as a technological problem of interoperability.
The problem with lock-in lies in the fact that the government is no longer able to take an autonomous decision once everything is deployed on one provider, because with all the interoperability problems you cannot move things, and you can only take limited decisions about how services are provided and how data is stored. So I think keeping in mind this flexible structure helps a lot: we can use cloud services without sacrificing performance, while still retaining some autonomy in terms of decision-making. And to support this strategy of multi-cloud architectures, a lot of investment is required in both national and regional cloud capacities. In Argentina, we experimented a little with this. We have a telecoms provider, Arsat, a company that provides internet and satellite services and has a data center able to provide at least some kinds of cloud services, and we wanted it to grow a bit. So we tried this approach: to foster innovation and target some specific investment within the company, working with the big cloud service providers. Our main goal was to show that it is possible to think about cloud models that are not entirely public or entirely private, that can actually be a blend and still maintain some public decision-making, within frameworks that foster accountability and transparency and in some way think about the country’s long-term sovereignty. Because having a steady strategy rooted in local actors also feeds back into the educational ecosystem and the productive ecosystem, and you become able to develop more targeted services for local companies.
So it acts as fuel for the general economy and its productivity model, but this model basically requires strong technical capacity, stable policy and a public mandate, which is what we find most difficult in Latin America in general, because we don’t have a very steady political landscape for thinking about how we address tech in the long term, as we are seeing the EU is more able to do. And this is also very important: when we think about how we can shape cloud services or digital technologies in general, this is not something that can be achieved by the state by itself. It requires rethinking how the public and private sectors interact with each other, and how local industry, academia and civil society are incorporated into an effective governance framework, not one that gathers multiple stakeholders at a table but then has only one or two of them making the decisions. That’s the second key point: we need to rethink how we govern technology from a regional perspective, especially now that the geopolitical aspect of tech is so polarized and so critical. In this general context, we need to think in a more systemic way and develop not only spaces where different voices are heard; we need real enforcement mechanisms. And to finish on what actions we can actually take: I think public procurement is an amazing lever here. The decisions taken by governments should take all these problems and aspects into consideration. As governments, we should be including in every public procurement contract things like data localization, open standards and more transparent governance, because those are the key aspects governments can actually tweak in order to have some kind of impact on a landscape so big that they cannot cover it all.
So, at least in our region, with the power imbalances we usually face, I think these are some of the key things we can actually do that are doable and achievable, and that in an indirect way at least help a little towards technical independence. Because the key thing with cloud services, which are, as the Internet was at one point, the underlying layer of every aspect of the digital ecosystem, is that we need to be able to permeate that ecosystem with our values of justice and inclusion, and now more than ever with some collective control over what’s happening there; that’s why we need solid infrastructures. But when we talk about infrastructures, it’s not just about how efficient they are or how to make them more resilient and more secure; we need really social and democratic governance mechanisms for these infrastructures, because they are in some way becoming a public good. So I think this is one of the key issues: how we understand the cloud, and what layers and names we are going to put on it, to see what the most effective policies are and how to get all stakeholders engaged. Otherwise, this will always be a discussion that seems to be relegated to engineers and technical people, because the cloud is that infrastructure that supposedly only concerns the technical community, while, as you were saying, Jenna, with AI and everything else, the cloud is becoming the spine of the whole digital ecosystem. So I think that’s why we need a more socio-technical approach, even to this, to start thinking about how we want to govern it.
Jenna Fung: Awesome. Agustina, this is perfect timing, because I was about to stop you as well; you bring some really key, important topics into the conversation, including democracy. Coming back to the original questions we started with, I think some of the key principles, as we try to think of effective policies to deal with this situation, or of shared approaches to counter our dependency on the very concentrated cloud services we rely on these days, are some of the things countries in Latin America, for instance, have been doing: diversifying their ecosystems, and interoperability is also important. Perhaps the audience also has their own views, depending on what you subscribe to. I would love to hear what you think should be part of a shared approach as you bring the insights from this panel back to your own role, whether you are a policymaker, someone working at an NGO, or a student, because this is a conversation where we need more stakeholders, not only the governments having control or the companies providing the services. So I guess that’s one thing we can continue and take further once we open the queue. But one thing I really want us to dive deeper on is the part about a more diverse, more secure, and locally accountable cloud ecosystem, because based on some of the examples Agustina shared with us, it looks like these are not only proposals but actions already taken by some countries and communities out there.
And so I wonder, as we move on with our conversation, what mix of market-driven innovation, regulatory oversight and public investment would be most effective in supporting this kind of development and making the cloud ecosystem more diverse, secure and locally accountable. We have people from the private sector and government with us in person, and we might have very different views, so perhaps we can merge this a little with our last question as well, because with the way the conversation has developed, I think our speakers here have also taken some notes we can consolidate, so we can open the queue sooner and answer those questions too. So: what mix of strategies, market-driven innovation, regulatory oversight and public investment, could be effective in supporting the development of a more diverse, secure and locally accountable cloud ecosystem? And what role should governments, industry and the different stakeholders in this room play in fostering domestic cloud innovation, through whichever methods? I don’t want to make suggestions here and limit your thoughts. So maybe I’ll pass it over to you, Anke, and then later on to Jeff, to see what insights you can offer before we go into the Q&A.
Anke Sikkema: Yes, thank you very much. Those are a lot of questions at the same time; I think we could talk about this for days. So I’m trying to keep it short, because I think it’s also important to hear more from you and to leave some time for Q&A in this hour as well. When we talk about the role of governments, there are three key words I would like to highlight here: protect, promote and partnership. These three words are also at the core of the DOSA agenda I talked about before. Protect is about legislation, and it’s not only about protecting users but also about protecting market parties, to create a level playing field. Promote is about stimulating innovation and the entry of new providers, and about creating scale, which is only possible through partnership. As Agustina also said, it’s not only governments, of course; it’s a partnership of businesses, governments and academia working together. This cooperation, I think, is very important, and in Europe we have the example of Gaia-X, a consortium of a hundred companies working together. What we also see now is a new initiative called the Eurostack; maybe some of you have heard of it. It’s an idea for a European industrial policy initiative bringing together tech, governance and funding for Europe-focused investments to build and adopt a suite of digital infrastructure, on all layers of the stack, from connectivity to cloud computing, AI and digital platforms. I think it’s good to be realistic about what is feasible, but it’s an interesting idea. What we see is that the concerns we had about the Dutch cloud market are not unique; they exist in more member states of the European Union, so it’s important to work together.
And maybe to address the point that Jeff was making, and then it’s over to you, I think it’s also good to look at the opportunities, not only at the concerns: to look at all the possibilities in the digital sector and at what it brings to the economy and to society as a whole. So let’s find a balance in this discussion, look at both sides, and make them reinforce each other. Thank you.
Jenna Fung: Jeff, do you have any response?
Jeff Bullwinkel: Happy to pick up on Anke’s comment and also to respond to some of the things Agustina said in her helpful intervention. One way to focus on the positive is to look at what can be achieved through hyperscale cloud services, say from Microsoft, for important things like cybersecurity, for one, and innovation, for another. Thinking about cybersecurity, which is indeed top of mind, or needs to be, for all of us: we have the ability to invest at a scale that exceeds what not just other companies but even governments in some cases can do. For instance, every day we aggregate about 77 trillion different signals from our cloud services in a way that allows us to understand how the threat environment is evolving and therefore guard against cyber attacks and threats even before they eventuate. And that’s something we want to do in partnership with government stakeholders as well, through sharing threat intelligence and that sort of thing. I’d also pick up on the question of data residency as it relates to sovereignty and the sense people have of needing the right level of control. A striking example is the cyber attack launched against Ukraine’s digital infrastructure, as well as critical infrastructure controlled by private companies, which Microsoft detected and, working with the Ukrainians and with President Zelensky’s office directly, was able to guard against effectively. At the time of this cyber attack and the kinetic attack, Ukraine had on its books a law that required government data to be stored within the borders of Ukraine. They suspended that law, and that allowed Microsoft and some other companies to migrate Ukrainian data out of Ukraine and across our own infrastructure in the European Union, paradoxically perhaps giving them data sovereignty by dispersing their digital assets across Europe.
And in fact, it was not surprising that among the first buildings to be hit by Russian missiles and tanks were Ukrainian government data centers. So it’s worth thinking about that in relation to this broader topic of sovereignty. Equally, I think it is helpful to think about the new economy developing around AI, this era of AI, because there is indeed a new technology stack, and there are three fundamental layers to it. One is infrastructure. The second is models, the foundation models themselves. The third is applications, and ultimately, of course, it goes to end users. I do think this conversation today, and conversations more generally, tend to focus undue attention on the infrastructure layer at the expense of everything else. Of course the infrastructure is absolutely critical. As I mentioned, we as a company have built immense infrastructure across Europe and around the world precisely to make sure that we can be, as we always have been, an open platform company on which others can innovate and grow. It may well be that governments across Europe, around the world, in Latin America perhaps, decide to invest public resources in their own infrastructure, and that is of course their prerogative. We might have a point of view about whether that’s the right use of resources given what’s already built, but we don’t have a vote. That’s quite clear. But if you focus wholly on the infrastructure layer, you overlook the innovation happening at the model layer and the application layer. And that’s what is really so exciting today, because there is so much happening around the creation of foundation models, large or small, that can run on hyperscale cloud infrastructure, including that provided by Microsoft.
And here we are in Europe. To take just two examples of French companies: Mistral AI, which people may have heard about, and Hugging Face are two French champions that operate not just on Microsoft infrastructure but on others as well, doing immensely exciting things at the model layer for the benefit of communities across France, across Europe, and around the world. And ultimately, of course, applications are proliferating at immense speed, because the innovation opportunity for individual entrepreneurs, small companies, and large enterprises is absolutely immense. So people shouldn’t overlook how much can be done, and is being done, including for the benefit of communities across the global north and the global south. In fact, over the past year I had the benefit of spending some time in Africa, visiting Kenya, Egypt, Nigeria, Tanzania, and Rwanda, and in all of these different markets I was able to meet people who are doing amazing things at the foundation model layer and at the application layer, so we shouldn’t lose sight of that. The last thing I would say is that it’s incumbent on a company like Microsoft to make sure that the infrastructure we have built is open and accessible. Indeed, one of our core European digital commitments builds on an announcement we made about a year and a half ago around our AI access principles. It’s really all about making sure that, as a company, we provide open access to those who want to use our infrastructure in a way that benefits people more broadly.
Jenna Fung: Thanks, Jeff. It’s interesting, hearing the remarks from all our speakers, to realize how the recent geopolitical atmosphere brings us back to these very conversations about how we deal with infrastructure. The internet today is very different from what the tech people imagined it could be decades ago. I was not born yet, so I don’t know exactly what it was. We only have four minutes left, so it’s really time for you to talk about what matters to you. It’s time to speak for yourself and for what makes sense for any kind of stakeholder, whether government, private sector, or civil society, because this is the critical moment that brings us back to the conversation around market concentration and infrastructure. How do you see it? Do we have any online questions? We have one onsite. Amazing.
Corinne Katt: Test, test. Can you hear me? Sorry, it’s confusing when you can’t hear yourself. Thank you so much for the wide-ranging conversation. My name is Corinne Katt. I am the head of Team Digital at the human rights NGO Article 19, and also a recovering postdoc who wrote about the political economy of cloud. My question, especially for Jeff Bullwinkel, is about the notion of the sovereign cloud, which I know has come up quite often. I’m quite tied into the debates in the Netherlands, where I’m originally from; I was part of the group of people who vocally pushed back and saw some real dangers in the Dutch government, especially, moving to a cloud that we don’t fully control. I was wondering if you could give your assessment of where the debate stands now. I’ve obviously read the European commitments, but it’s still unclear to me how they would preclude Microsoft from being beholden to the US Cloud Act. I was wondering what further information you could give about that. Thank you.
Jeff Bullwinkel: Well, thanks for the question. Again, I think this concern about sovereignty and autonomy, as I mentioned, really is not new, but it is pronounced, and it’s coming up in a more focused way given the somewhat volatile geopolitical environment. We have for a long time been focused on trying to build a public cloud that is sovereign by design, effectively. And we enhanced that recently with an announcement made just last week, in fact in your country of origin, when our CEO Satya Nadella was in Amsterdam for a major event. He gave a talk in which he announced a new approach to sovereign cloud in Europe, in the context of the broader European digital commitments I mentioned earlier. He described a couple of different approaches. One is sovereign public cloud, which has various elements of control to it. The second is sovereign private cloud, which is designed for customers, and perhaps the Dutch government could be one such customer, that have a particular set of very specialized needs, where you want absolute autarky, autonomy, disconnectedness, separate and apart from the global internet. That’s certainly something we can provide as well and have announced as part of this broader effort around sovereignty. So we understand the nature of the challenge, I would say. In that announcement Satya also emphasized the critical role that our European partner companies play, including in the Netherlands but also across Europe. We already do a lot of work with various European companies in the context of our broader cloud services, with a focus on the value-add services they can provide, which often center on sovereignty considerations: companies such as Leonardo in Italy, Proximus in Belgium, and Telefonica in Spain.
There are various companies out there we work with quite closely, and we will work with more, with a focus on sovereignty. On the Cloud Act, and this really gets to the question of U.S. government access to data: as I say, this is nothing new. It goes back to Edward Snowden, by now 12 or so years ago. There are lots of things we have done as a company to guard against the risk of intrusive access to data that belongs, first and foremost, to our European customers. Microsoft’s view is, and always has been, that it is our customers’ data, not our data. And so, as and when we receive a request for access, we committed some years ago already to defend against that request up to and including litigation with the U.S. government, even in the Supreme Court, and we have a strong track record of doing just that. The last point I would make is that in cloud services today, most European companies in this space themselves have global aspirations, and therefore, much like Microsoft or any other U.S.-headquartered company, will be susceptible to jurisdiction in the same sort of way. So I think the question really for all of us is: what kinds of steps can a company with global aspirations take that will be effective? But make no mistake, we are very mindful of the fact that, as a company, we are investing in Europe, for Europe, with our European customers, partners, and government stakeholders in mind, and in a way that will protect their data against improper access. That, by the way, goes for our services around the world as well.
Jenna Fung: Since we are over time, I will intentionally skip closing remarks, because digital autonomy is a topic we should put to every single one of you. As you can see, today on this panel we have prominent voices from government and the private sector, but at the end of the day this is also about people, and each and every one of us should answer these questions. On this topic it’s hard to avoid the term sovereignty, and it can be a slippery slope. So how should we approach it? What makes sense to you? And what should we do? Perhaps those are questions you can bring home and continue the conversation elsewhere. I think that concludes our session today. Thank you so much for being with us. Thank you.
Anke Sikkema
Speech speed
128 words per minute
Speech length
1105 words
Speech time
517 seconds
Strategic dependencies on US-based cloud providers pose risks to national security and data control
Explanation
The Netherlands government identified that heavy reliance on major US cloud providers creates vulnerabilities in maintaining control and access to sensitive government data. This dependency raises concerns about national security and the ability to protect critical information.
Evidence
Research agencies in the Netherlands indicated that European players have relatively weak market positions in cloud services. The SIDN (.nl domain provider) case sparked national debate when it announced moving to Amazon Web Services, leading to ministerial investigation.
Major discussion point
Cloud Market Concentration and Digital Autonomy Concerns
Topics
Infrastructure | Legal and regulatory | Cybersecurity
Agreed with
– Jeff Bullwinkel
– Agustina Brizio
Agreed on
Concerns about cloud market concentration and digital sovereignty are legitimate and natural
Netherlands developed DOSA (Digital Open Strategic Autonomy) agenda to address cloud dependencies while remaining open globally
Explanation
The Dutch government created a strategic framework that aims to be ‘open to the outside world where possible and protective when necessary.’ This approach seeks to balance global openness with protection against strategic dependencies in digital technologies.
Evidence
The DOSA agenda was published in 2023, with cloud computing as one of the priority areas. The agenda deliberately includes ‘open’ to emphasize acting globally while addressing strategic dependencies.
Major discussion point
Government Policy Responses and Regulatory Approaches
Topics
Legal and regulatory | Infrastructure | Economic
EU legislative measures like Data Act and Digital Markets Act help mitigate risks while promoting investment
Explanation
European Union has taken comprehensive legislative steps to protect against cloud-related risks while simultaneously promoting and investing in cloud and data infrastructure projects. This dual approach addresses both regulatory protection and market development.
Evidence
Specific mention of European Data Act, Data Governance Act, and Digital Markets Act as important legislative steps. EU also started projects to promote and invest in cloud and data infrastructure.
Major discussion point
Government Policy Responses and Regulatory Approaches
Topics
Legal and regulatory | Economic | Infrastructure
Cloud computing provides essential efficiency and user-friendly services that benefit economy and society
Explanation
Despite concerns about dependencies, cloud services offer significant advantages including efficient data storage and user-friendly interfaces that have become integral to modern digital economy. The benefits cannot be ignored when addressing concerns.
Evidence
Emphasized that cloud is an efficient way to store data, is user-friendly, and ‘we can’t think of a world without it.’
Major discussion point
Innovation and Economic Opportunities
Topics
Economic | Infrastructure | Development
Agreed with
– Jeff Bullwinkel
Agreed on
Cloud computing provides essential benefits despite legitimate concerns
Government role involves three key elements: protect through legislation, promote innovation, and foster partnerships
Explanation
Governments should focus on creating protective legislation for users and market parties, stimulating innovation and new provider entry, and building partnerships across sectors. This multi-faceted approach requires collaboration between businesses, governments, and academia.
Evidence
Referenced European initiatives like Gaia X (consortium of 100 companies) and Eurostack (European industrial policy initiative for digital infrastructure across connectivity, cloud computing, AI and digital platforms).
Major discussion point
Multi-stakeholder Governance and Collaboration
Topics
Legal and regulatory | Economic | Infrastructure
Agreed with
– Jeff Bullwinkel
– Agustina Brizio
– Jenna Fung
Agreed on
Multi-stakeholder collaboration is essential for effective cloud governance
European initiatives like Gaia X and Eurostack aim to build collaborative digital infrastructure
Explanation
Europe is developing consortium-based approaches to create independent digital infrastructure capabilities. These initiatives bring together multiple stakeholders to build European-focused digital infrastructure across various technology layers.
Evidence
Gaia X involved 100 companies working together. Eurostack is described as bringing together tech, governance and funding for Europe-focused investments across connectivity, cloud computing, AI and digital platforms.
Major discussion point
Regional and Local Cloud Development
Topics
Infrastructure | Economic | Legal and regulatory
Disagreed with
– Jeff Bullwinkel
– Agustina Brizio
Disagreed on
Infrastructure focus vs. innovation layer emphasis
Jeff Bullwinkel
Speech speed
174 words per minute
Speech length
2680 words
Speech time
923 seconds
Market concentration among few major providers creates legitimate concerns about digital sovereignty
Explanation
Concerns about data privacy, security, and digital sovereignty are natural, understandable, and appropriate given the current market structure. These concerns have historical roots but are more pronounced in today’s volatile geopolitical environment.
Evidence
Referenced Edward Snowden case from over 10 years ago as an example of when trust in technology became an issue, noting that concerns around sovereignty and autonomy are ‘not perhaps new’ but ‘more pronounced and more frequent.’
Major discussion point
Cloud Market Concentration and Digital Autonomy Concerns
Topics
Legal and regulatory | Cybersecurity | Human rights
Agreed with
– Anke Sikkema
– Agustina Brizio
Agreed on
Concerns about cloud market concentration and digital sovereignty are legitimate and natural
Trust is fundamental – people won’t use technology they don’t trust, and trust arrives on foot but leaves on horseback
Explanation
Trust is the critical element in technology adoption, and companies must work hard to maintain it. The Dutch saying illustrates how trust is built slowly but can be lost quickly, which guides Microsoft’s approach to addressing sovereignty concerns.
Evidence
Quoted 18th century Dutch statesman saying ‘trust arrives on foot and leaves on horseback.’ Emphasized that ‘people will simply not use technology they don’t trust, and that guides everything that we do.’
Major discussion point
Trust and Security in Cloud Services
Topics
Human rights | Legal and regulatory | Cybersecurity
Hyperscale providers can invest in cybersecurity at levels exceeding individual governments, processing 77 trillion signals daily
Explanation
Large cloud providers have the ability to invest at scale in cybersecurity measures that exceed what individual companies or even governments might achieve. This scale allows for comprehensive threat detection and prevention capabilities.
Evidence
Microsoft aggregates 77 trillion different signals daily from cloud services to understand threat environment evolution and guard against cyber attacks before they occur. Also shares threat intelligence with government stakeholders.
Major discussion point
Trust and Security in Cloud Services
Topics
Cybersecurity | Infrastructure | Legal and regulatory
Disagreed with
– Agustina Brizio
Disagreed on
Role of hyperscale cloud providers in cybersecurity vs. sovereignty concerns
Data sovereignty can sometimes be better achieved through distributed infrastructure rather than local storage
Explanation
The Ukraine example demonstrates that data sovereignty may be better protected through geographic distribution rather than local storage requirements. Dispersing digital assets across secure infrastructure can provide better protection than keeping data within national borders.
Evidence
Ukraine suspended its law requiring government data storage within borders during Russian invasion, allowing Microsoft to migrate Ukrainian government data across EU infrastructure. Russian missiles targeted Ukrainian government data centers among first buildings hit.
Major discussion point
Trust and Security in Cloud Services
Topics
Cybersecurity | Legal and regulatory | Infrastructure
Disagreed with
– Agustina Brizio
Disagreed on
Distributed vs. localized data storage for sovereignty
AI era creates immense opportunities across infrastructure, model, and application layers
Explanation
The new AI economy consists of three fundamental layers – infrastructure, foundation models, and applications – each offering significant innovation opportunities. This technology stack enables widespread innovation and economic development.
Evidence
Described the three-layer AI technology stack and emphasized the ‘immense innovation opportunity for people, individual entrepreneurs, small companies, large enterprises.’
Major discussion point
Innovation and Economic Opportunities
Topics
Economic | Infrastructure | Development
Agreed with
– Anke Sikkema
Agreed on
Cloud computing provides essential benefits despite legitimate concerns
Focus shouldn’t be solely on infrastructure layer but also on innovation happening at model and application levels
Explanation
While infrastructure is critical, excessive focus on this layer overlooks significant innovation occurring in foundation models and applications. The real excitement and opportunity lies in what’s being built on top of the infrastructure.
Evidence
Noted that conversations ‘often tends to focus perhaps undue attention on the infrastructure layer at the expense of everything else’ while ‘innovation that’s happening at the model layer and the application layer’ is ‘what really is so exciting today.’
Major discussion point
Innovation and Economic Opportunities
Topics
Economic | Infrastructure | Development
Disagreed with
– Anke Sikkema
– Agustina Brizio
Disagreed on
Infrastructure focus vs. innovation layer emphasis
European companies like Mistral AI and Hugging Face demonstrate successful innovation on hyperscale infrastructure
Explanation
French companies are successfully creating foundation models and AI innovations using hyperscale cloud infrastructure, showing that European innovation can thrive on global platforms. This demonstrates the potential for local innovation on global infrastructure.
Evidence
Specifically mentioned Mistral AI and Hugging Face as ‘two French champions that actually operate not just on Microsoft infrastructure, but others as well, but doing immensely exciting things at the model layer for the benefit of communities across Europe, across France, across Europe, around the world.’
Major discussion point
Innovation and Economic Opportunities
Topics
Economic | Infrastructure | Development
Companies must follow government-made rules while maintaining self-regulation through responsible AI principles
Explanation
While companies can establish their own responsible AI principles for self-regulation, ultimately governments make the rules that companies must follow. Microsoft established responsible AI principles eight years ago but recognizes government authority in regulation.
Evidence
Microsoft articulated responsible AI principles eight years ago covering privacy, security, safety, reliability, fairness, inclusiveness, transparency and accountability. Emphasized ‘we’re just one company in one sector and we don’t make the rules’ and ‘ultimately it’s up to governments to make the rules and for companies like ours to follow them.’
Major discussion point
Multi-stakeholder Governance and Collaboration
Topics
Legal and regulatory | Human rights | Economic
Agreed with
– Anke Sikkema
– Agustina Brizio
– Jenna Fung
Agreed on
Multi-stakeholder collaboration is essential for effective cloud governance
Microsoft’s European commitments include doubling infrastructure capacity and establishing EU data boundaries
Explanation
Microsoft has made specific commitments to increase European infrastructure capacity by 40% in coming years and established EU data boundary to ensure European customer data stays within EU. These investments demonstrate commitment to European digital sovereignty concerns.
Evidence
Committed to doubling capacity between 2023-2027 with over 200 data centers in 16 European countries. Established EU data boundary for Microsoft Cloud allowing commitment that customer data is stored and processed within European Union. Also committed to push back against any government orders to seize or suspend cloud services.
Major discussion point
Regional and Local Cloud Development
Topics
Infrastructure | Legal and regulatory | Human rights
Agustina Brizio
Speech speed
140 words per minute
Speech length
1908 words
Speech time
813 seconds
Global South faces power imbalances and regulatory gaps when dealing with hyperscale cloud providers
Explanation
Latin American countries have significant dependency on US-based hyperscale cloud providers but lack the regulatory frameworks and power to effectively govern these relationships. Many hyperscalers don’t even have regional presence, making regulation more complex.
Evidence
Many hyperscalers don’t have presence within Latin American region, adding complexity to regulation. Countries rely on US frameworks because companies are legally constituted there, creating challenges for democratic oversight.
Major discussion point
Cloud Market Concentration and Digital Autonomy Concerns
Topics
Legal and regulatory | Development | Economic
Agreed with
– Anke Sikkema
– Jeff Bullwinkel
Agreed on
Concerns about cloud market concentration and digital sovereignty are legitimate and natural
Disagreed with
– Jeff Bullwinkel
Disagreed on
Role of hyperscale cloud providers in cybersecurity vs. sovereignty concerns
Cloud dependency affects democratic oversight and government ability to respond to crises
Explanation
Heavy reliance on foreign cloud providers erodes democratic governance structures and reduces government capacity to respond effectively during crises or to design independent digital policies. This dependency undermines national sovereignty in the digital realm.
Evidence
Described how dependency ‘erodes our democratic construction’ and ‘diminishes the ability that governments have to respond into crisis, safeguard data or even to be able to design their own independent digital policies.’
Major discussion point
Cloud Market Concentration and Digital Autonomy Concerns
Topics
Legal and regulatory | Human rights | Cybersecurity
Public procurement policies should include data localization and transparency requirements
Explanation
Governments can use their purchasing power to influence cloud provider behavior by including specific requirements in procurement contracts. This approach allows governments to have some impact on a market landscape that is otherwise too large for them to control directly.
Evidence
Emphasized that ‘public procurement is an amazing lever here’ and governments should include ‘data localization, having open standards, having a more transparent governance’ in every public procurement contract as ‘key aspects that governments actually can tweak.’
Major discussion point
Government Policy Responses and Regulatory Approaches
Topics
Legal and regulatory | Economic | Infrastructure
Disagreed with
– Jeff Bullwinkel
Disagreed on
Distributed vs. localized data storage for sovereignty
Multi-cloud architecture prevents vendor lock-in and maintains government decision-making autonomy
Explanation
Using multiple cloud providers instead of relying on a single provider helps governments avoid vendor lock-in situations and maintain autonomous decision-making capabilities. This approach allows continued use of cloud benefits while retaining flexibility and control.
Evidence
Argentina adopted multi-cloud architecture approach, prioritizing local providers alongside big ones, guaranteeing portability and interoperability. Emphasized that vendor lock-in means ‘government is no longer able to take an autonomous decision’ due to interoperability problems.
Major discussion point
Government Policy Responses and Regulatory Approaches
Topics
Infrastructure | Legal and regulatory | Economic
Effective governance requires rethinking how public and private sectors interact with academia and civil society
Explanation
Addressing cloud governance challenges requires moving beyond traditional stakeholder consultation to create frameworks where multiple stakeholders have real decision-making power and enforcement mechanisms. Current approaches often gather stakeholders but limit actual decision-making to few actors.
Evidence
Emphasized need for ‘effective governance framework, not only one that gathers multi-stakeholders in a table but then has only one or two stakeholders making the decision’ and need for ‘real enforcement mechanisms.’
Major discussion point
Multi-stakeholder Governance and Collaboration
Topics
Legal and regulatory | Human rights | Economic
Agreed with
– Anke Sikkema
– Jeff Bullwinkel
– Jenna Fung
Agreed on
Multi-stakeholder collaboration is essential for effective cloud governance
Cloud infrastructure should be governed as public good with democratic oversight mechanisms
Explanation
Cloud services are becoming fundamental infrastructure similar to public utilities and should be governed with social and democratic mechanisms rather than purely technical or efficiency considerations. This requires treating cloud as public good with collective control mechanisms.
Evidence
Argued that cloud infrastructure is ‘becoming like a public good’ and emphasized need for ‘social and democratic governance mechanism towards these infrastructures’ rather than just focusing on efficiency, resilience and security.
Major discussion point
Multi-stakeholder Governance and Collaboration
Topics
Legal and regulatory | Human rights | Infrastructure
Investment in national and regional cloud capacities can support local innovation ecosystems
Explanation
Developing local cloud capabilities through public-private partnerships can foster broader economic development by supporting local companies, educational institutions, and creating more targeted services. This approach builds long-term sovereignty while maintaining cloud benefits.
Evidence
Argentina experimented with Arsat, a telecoms provider with data center capabilities, to provide cloud services. This approach ‘backs up into the educational ecosystem, the productive ecosystem, you are able to develop more targeted services to our local companies.’
Major discussion point
Regional and Local Cloud Development
Topics
Infrastructure | Economic | Development
Disagreed with
– Jeff Bullwinkel
– Anke Sikkema
Disagreed on
Infrastructure focus vs. innovation layer emphasis
Hybrid public-private cloud models can maintain sovereignty while leveraging global capabilities
Explanation
Countries can develop cloud models that blend public and private elements, maintaining public decision-making frameworks while benefiting from private sector capabilities. This approach requires strong technical capacity and stable policy frameworks.
Evidence
Argentina’s experience with Arsat demonstrated ‘different cloud models that not only are entirely public or entirely private that can actually have a blend and still maintain a little of public decision’ while fostering accountability and transparency.
Major discussion point
Regional and Local Cloud Development
Topics
Infrastructure | Economic | Legal and regulatory
Corinne Katt
Speech speed
151 words per minute
Speech length
200 words
Speech time
79 seconds
US Cloud Act creates concerns about government access to European data despite company commitments to resist
Explanation
Despite Microsoft’s European commitments and promises to resist government data access requests, the US Cloud Act still creates legal obligations that could compromise European data sovereignty. The speaker questions how Microsoft’s commitments can fully address this fundamental legal framework issue.
Evidence
Identified as ‘head of Team Digital at Human Rights NGO, Article 19’ and ‘recovering postdoc who wrote their work on the political economy of cloud.’ Specifically questioned how European commitments would ‘preclude Microsoft from being beholden to the Cloud Act, the US Cloud Act.’
Major discussion point
Trust and Security in Cloud Services
Topics
Legal and regulatory | Human rights | Cybersecurity
Jenna Fung
Speech speed
130 words per minute
Speech length
2511 words
Speech time
1154 seconds
Cloud computing has become ubiquitous infrastructure that people encounter daily without realizing it
Explanation
Cloud services are now integrated into everyday activities from backing up photos to using smart home devices, Google Docs, and Zoom calls. This widespread adoption has made cloud computing the backbone of today’s digital economy, yet users often don’t recognize their dependency on these services.
Evidence
Examples provided include backing up photos from phones, using smart home devices, video doorbells, Google Docs, and Zoom calls for IGF workshop coordination.
Major discussion point
Cloud Market Concentration and Digital Autonomy Concerns
Topics
Infrastructure | Economic | Development
The concentration of cloud services among few major US-based providers raises concerns about strategic dependency
Explanation
Much of the world relies heavily on a handful of major cloud providers, primarily US-based companies like AWS, Microsoft Azure, and Google Cloud. While these companies have done amazing work, this concentration is causing regions like Europe to have concerns about strategic dependency, with Asia showing mixed approaches including some reliance on Chinese providers.
Evidence
Mentioned AWS, Microsoft Azure, and Google Cloud as dominant US-based providers. Noted Europe’s concerns about strategic dependency and Asia’s mixed picture with some countries using Alibaba or Tencent (Chinese companies). Only few countries have managed to build robust domestic cloud ecosystems.
Major discussion point
Cloud Market Concentration and Digital Autonomy Concerns
Topics
Infrastructure | Economic | Legal and regulatory
Digital autonomy represents the ultimate aspiration beyond data residency and data sovereignty
Explanation
The concept progresses from data residency (physical storage within borders) to data sovereignty (legal jurisdiction over data) to the broader goal of digital autonomy. This framework helps structure meaningful conversations about cloud governance that can be applied in different national contexts.
Evidence
Defined data residency as data stored/processed within national borders, data sovereignty as data subject to country’s laws and regulations, with digital autonomy as the ultimate aspiration in this domain.
Major discussion point
Government Policy Responses and Regulatory Approaches
Topics
Legal and regulatory | Infrastructure | Human rights
Key questions about cloud governance require multi-stakeholder input beyond just government and companies
Explanation
Important decisions about cloud infrastructure, digital autonomy, and governance shouldn’t be left solely to governments or service providers. These are questions that every stakeholder – including NGO workers, students, and citizens – should engage with and help answer.
Evidence
Emphasized that digital autonomy ‘is a topic that we should ask to every single one of you who are in there’ and noted the panel had ‘prominent voices from the government, from the private sectors, but at the end of the day, it’s also related to the people, and each and every one of us are the one who should answer those questions.’
Major discussion point
Multi-stakeholder Governance and Collaboration
Topics
Legal and regulatory | Human rights | Economic
Agreed with
– Anke Sikkema
– Jeff Bullwinkel
– Agustina Brizio
Agreed on
Multi-stakeholder collaboration is essential for effective cloud governance
The internet today differs significantly from original technical visions due to geopolitical influences
Explanation
Recent geopolitical tensions have brought back fundamental conversations about internet infrastructure governance. The current internet landscape has evolved differently from what technologists originally envisioned, influenced by political and economic factors rather than purely technical considerations.
Evidence
Noted that ‘recent geopolitical atmosphere brings us back to the very conversations and discuss about how we deal with infrastructure’ and ‘The internet today is very different from what the tech people imagined the internet could have been decades ago.’
Major discussion point
Cloud Market Concentration and Digital Autonomy Concerns
Topics
Infrastructure | Legal and regulatory | Economic
Agreements
Agreement points
Cloud computing provides essential benefits despite legitimate concerns
Speakers
– Anke Sikkema
– Jeff Bullwinkel
Arguments
Cloud computing provides essential efficiency and user-friendly services that benefit economy and society
AI era creates immense opportunities across infrastructure, model, and application layers
Summary
Both speakers acknowledge that while there are legitimate concerns about cloud concentration, the benefits of cloud computing for economy and society cannot be ignored. They emphasize the need to balance addressing concerns with recognizing opportunities.
Topics
Economic | Infrastructure | Development
Concerns about cloud market concentration and digital sovereignty are legitimate and natural
Speakers
– Anke Sikkema
– Jeff Bullwinkel
– Agustina Brizio
Arguments
Strategic dependencies on US-based cloud providers pose risks to national security and data control
Market concentration among few major providers creates legitimate concerns about digital sovereignty
Global South faces power imbalances and regulatory gaps when dealing with hyperscale cloud providers
Summary
All three main speakers agree that concerns about the concentration of cloud services among few providers are legitimate, natural, and appropriate, though they come from different regional perspectives.
Topics
Legal and regulatory | Cybersecurity | Infrastructure
Multi-stakeholder collaboration is essential for effective cloud governance
Speakers
– Anke Sikkema
– Jeff Bullwinkel
– Agustina Brizio
– Jenna Fung
Arguments
Government role involves three key elements: protect through legislation, promote innovation, and foster partnerships
Companies must follow government-made rules while maintaining self-regulation through responsible AI principles
Effective governance requires rethinking how public and private sectors interact with academia and civil society
Key questions about cloud governance require multi-stakeholder input beyond just government and companies
Summary
All speakers emphasize that addressing cloud governance challenges requires collaboration between governments, private sector, academia, and civil society, rather than any single stakeholder acting alone.
Topics
Legal and regulatory | Human rights | Economic
Similar viewpoints
Both speakers from government/policy backgrounds emphasize the importance of regulatory frameworks and government procurement policies as tools to address cloud dependency issues while maintaining benefits.
Speakers
– Anke Sikkema
– Agustina Brizio
Arguments
EU legislative measures like Data Act and Digital Markets Act help mitigate risks while promoting investment
Public procurement policies should include data localization and transparency requirements
Topics
Legal and regulatory | Economic | Infrastructure
Both speakers recognize that effective data sovereignty may require flexible approaches rather than strict data localization, though they approach this from different perspectives (private sector vs. government policy).
Speakers
– Jeff Bullwinkel
– Agustina Brizio
Arguments
Data sovereignty can sometimes be better achieved through distributed infrastructure rather than local storage
Multi-cloud architecture prevents vendor lock-in and maintains government decision-making autonomy
Topics
Infrastructure | Legal and regulatory | Cybersecurity
Both speakers support the development of regional and collaborative approaches to building cloud infrastructure capabilities, seeing this as a way to address dependency concerns while fostering innovation.
Speakers
– Anke Sikkema
– Agustina Brizio
Arguments
European initiatives like Gaia X and Eurostack aim to build collaborative digital infrastructure
Investment in national and regional cloud capacities can support local innovation ecosystems
Topics
Infrastructure | Economic | Development
Unexpected consensus
Flexibility over strict data localization requirements
Speakers
– Jeff Bullwinkel
– Agustina Brizio
Arguments
Data sovereignty can sometimes be better achieved through distributed infrastructure rather than local storage
Multi-cloud architecture prevents vendor lock-in and maintains government decision-making autonomy
Explanation
It’s unexpected that a major cloud provider (Microsoft) and a Global South policy advocate would agree that strict data localization isn’t always the best approach to sovereignty. Both recognize that flexibility and strategic distribution can be more effective than rigid local storage requirements.
Topics
Infrastructure | Legal and regulatory | Cybersecurity
Trust as fundamental challenge requiring ongoing attention
Speakers
– Jeff Bullwinkel
– Corinne Katt
Arguments
Trust is fundamental – people won’t use technology they don’t trust, and trust arrives on foot but leaves on horseback
US Cloud Act creates concerns about government access to European data despite company commitments to resist
Explanation
Despite being on different sides of the cloud sovereignty debate, both the Microsoft representative and the human rights advocate acknowledge that trust issues are fundamental and ongoing challenges that require continuous attention and cannot be easily resolved through technical solutions alone.
Topics
Legal and regulatory | Human rights | Cybersecurity
Overall assessment
Summary
The speakers showed remarkable consensus on the legitimacy of cloud sovereignty concerns, the need for multi-stakeholder collaboration, and the importance of balancing benefits with risks. They agreed that current market concentration creates real challenges while acknowledging cloud computing’s essential role in modern digital economy.
Consensus level
High level of consensus on problem identification and governance principles, with differences mainly in emphasis and proposed solutions rather than fundamental disagreements. This suggests a mature understanding of the issues across stakeholders and potential for collaborative policy development, though implementation details may still require negotiation.
Differences
Different viewpoints
Role of hyperscale cloud providers in cybersecurity vs. sovereignty concerns
Speakers
– Jeff Bullwinkel
– Agustina Brizio
Arguments
Hyperscale providers can invest in cybersecurity at levels exceeding individual governments, processing 77 trillion signals daily
Global South faces power imbalances and regulatory gaps when dealing with hyperscale cloud providers
Summary
Jeff emphasizes the security benefits and scale advantages of hyperscale providers, while Agustina focuses on the power imbalances and democratic oversight challenges they create, particularly for Global South countries
Topics
Cybersecurity | Legal and regulatory | Development
Infrastructure focus vs. innovation layer emphasis
Speakers
– Jeff Bullwinkel
– Anke Sikkema
– Agustina Brizio
Arguments
Focus shouldn’t be solely on infrastructure layer but also on innovation happening at model and application levels
European initiatives like Gaia X and Eurostack aim to build collaborative digital infrastructure
Investment in national and regional cloud capacities can support local innovation ecosystems
Summary
Jeff argues against excessive focus on the infrastructure layer, preferring emphasis on model and application innovation, while Anke and Agustina advocate for building independent infrastructure capabilities as a foundation for sovereignty
Topics
Infrastructure | Economic | Development
Distributed vs. localized data storage for sovereignty
Speakers
– Jeff Bullwinkel
– Agustina Brizio
Arguments
Data sovereignty can sometimes be better achieved through distributed infrastructure rather than local storage
Public procurement policies should include data localization and transparency requirements
Summary
Jeff uses Ukraine example to argue that distributed storage can provide better sovereignty protection, while Agustina advocates for data localization requirements as a tool for maintaining government control
Topics
Legal and regulatory | Cybersecurity | Infrastructure
Unexpected differences
Benefits vs. risks framing of cloud services
Speakers
– Jeff Bullwinkel
– Agustina Brizio
Arguments
AI era creates immense opportunities across infrastructure, model, and application layers
Cloud dependency affects democratic oversight and government ability to respond to crises
Explanation
Unexpected because both acknowledge cloud benefits, but Jeff consistently frames the discussion around opportunities and innovation potential, while Agustina emphasizes democratic erosion and crisis response limitations – representing fundamentally different risk-benefit calculations
Topics
Economic | Human rights | Legal and regulatory
Trust-building approaches
Speakers
– Jeff Bullwinkel
– Corinne Katt
Arguments
Trust is fundamental – people won’t use technology they don’t trust, and trust arrives on foot but leaves on horseback
US Cloud Act creates concerns about government access to European data despite company commitments to resist
Explanation
Unexpected because Jeff emphasizes trust-building through company commitments and technical measures, while Corinne questions whether these commitments can overcome fundamental legal framework issues like the US Cloud Act – suggesting structural vs. voluntary approaches to trust
Topics
Legal and regulatory | Human rights | Cybersecurity
Overall assessment
Summary
Main disagreements center on whether solutions should focus on company commitments and distributed infrastructure versus building independent capabilities and stronger regulatory frameworks, with particular tension between innovation opportunities and democratic governance concerns
Disagreement level
Moderate disagreement with significant implications – while speakers share concerns about market concentration, their different approaches (corporate self-regulation vs. government intervention vs. democratic governance) could lead to incompatible policy directions and highlight fundamental tensions between efficiency, innovation, and sovereignty in cloud governance
Takeaways
Key takeaways
Cloud market concentration among few major providers (primarily US-based) creates legitimate concerns about digital autonomy, data sovereignty, and democratic oversight
Trust is fundamental to cloud adoption – users won’t adopt technology they don’t trust, and maintaining trust requires transparency and accountability from providers
A balanced approach is needed that captures cloud benefits (efficiency, innovation, cybersecurity) while addressing sovereignty concerns through diversification and regulatory frameworks
Multi-cloud architecture and avoiding vendor lock-in are essential strategies for maintaining government decision-making autonomy
Effective cloud governance requires multi-stakeholder collaboration between governments, private sector, academia, and civil society rather than single-actor solutions
Government roles should focus on three pillars: protect through legislation, promote innovation, and foster partnerships
Public procurement policies can be powerful tools for incorporating data localization, transparency, and sovereignty requirements
Innovation opportunities exist across all layers of the technology stack (infrastructure, models, applications), not just infrastructure
Regional cooperation and hybrid public-private models can provide alternatives to complete dependency on global hyperscalers
Resolutions and action items
Participants should bring the conversation about digital autonomy back to their own countries and contexts for further discussion
Governments should incorporate data localization, open standards, and transparency requirements into public procurement contracts
Investment in national and regional cloud capacities should be prioritized to support local innovation ecosystems
Multi-cloud architectures should be adopted to prevent vendor lock-in and maintain decision-making autonomy
Unresolved issues
How to effectively address US Cloud Act concerns regarding government access to European data despite company commitments
What specific mix of market-driven innovation, regulatory oversight, and public investment would be most effective for developing diverse cloud ecosystems
How to balance the benefits of hyperscale cloud services with sovereignty concerns in practice
How to achieve sustainable local cloud development given limited investment capacity and human talent in many regions
What constitutes the optimal definition and implementation of ‘digital sovereignty’ versus complete technological autarky
How to ensure democratic governance mechanisms for cloud infrastructure that increasingly functions as public goods
How smaller countries and Global South nations can effectively negotiate with powerful hyperscale providers given existing power imbalances
Suggested compromises
Adopt ‘open to the outside world where possible, protective when necessary’ approach as demonstrated by Netherlands’ DOSA agenda
Implement sovereign cloud solutions that offer different levels of control – from sovereign public cloud to completely disconnected sovereign private cloud
Develop hybrid public-private cloud models that leverage global capabilities while maintaining some local control and decision-making authority
Focus on creating open and interoperable standards rather than complete technological independence
Pursue regional cooperation initiatives (like European Gaia X and Eurostack) that pool resources and expertise while maintaining some autonomy from global providers
Use partnership approaches with hyperscale providers that include European companies providing value-added sovereignty-focused services
Thought provoking comments
Trust arrives on foot and leaves on horseback – a famous Dutch statesman quote that guides everything we do. People will simply not use technology they don’t trust.
Speaker
Jeff Bullwinkel
Reason
This metaphor powerfully encapsulates the central challenge in cloud computing discussions. It reframes the entire debate from technical specifications to fundamental human psychology and trust-building, acknowledging that all technological solutions are meaningless without user confidence.
Impact
This comment shifted the conversation from purely technical and regulatory concerns to the human element of technology adoption. It provided a philosophical foundation that influenced how subsequent speakers framed their arguments, with trust becoming a recurring theme throughout the discussion.
The problem with lock-in relies on the fact that the government is no longer able to take an autonomous decision… So we require from governments to think about different strategies in which you can address this ultra-concentrated market.
Speaker
Agustina Brizio
Reason
This comment redefines vendor lock-in not as a technical problem but as a democratic governance issue. It challenges the audience to think beyond efficiency metrics to consider how technological dependencies can erode governmental decision-making autonomy and democratic oversight.
Impact
This intervention fundamentally elevated the discussion from market competition concerns to democratic governance implications. It introduced the concept that cloud dependency isn’t just about economics or security, but about preserving democratic institutions’ ability to make autonomous decisions.
Ukraine suspended their law requiring government data to be stored within borders, and paradoxically achieved data sovereignty by dispersing their digital assets across Europe when Russian missiles targeted their data centers.
Speaker
Jeff Bullwinkel
Reason
This real-world example challenges conventional thinking about data sovereignty and physical location. It demonstrates how rigid interpretations of sovereignty can actually undermine security and autonomy, forcing a reconceptualization of what digital sovereignty means in practice.
Impact
This concrete example forced all participants to grapple with the complexity and potential contradictions in sovereignty concepts. It shifted the discussion from theoretical policy frameworks to practical realities, influencing how other speakers addressed the balance between ideological positions and pragmatic needs.
We need to have a more socio-technical approach… because cloud is being like the spine of all the digital ecosystems. This is always a discussion that seems to be relegated to engineers and technicals, whilst cloud is becoming the infrastructure that affects everyone.
Speaker
Agustina Brizio
Reason
This comment challenges the technical framing of cloud discussions and argues for democratizing the conversation. It recognizes that infrastructure decisions have profound social implications and shouldn’t be left solely to technical experts, calling for broader stakeholder engagement.
Impact
This observation broadened the scope of who should be involved in cloud governance discussions. It influenced the moderator’s closing remarks about how ‘each and every one of us’ should answer these questions, moving the conversation from expert-driven to citizen-inclusive governance models.
Open to the outside world where possible and protective when necessary – this is how we look at Digital Open Strategic Autonomy.
Speaker
Anke Sikkema
Reason
This formulation provides a nuanced middle path between technological nationalism and complete openness. It acknowledges that absolute positions are impractical while providing a framework for making contextual decisions about when to prioritize openness versus protection.
Impact
This balanced approach influenced how other speakers framed their arguments, moving away from binary thinking toward more nuanced policy positions. It provided a practical framework that other participants could reference when discussing their own regional approaches.
This conversation tends to focus perhaps undue attention on the infrastructure layer at the expense of everything else… you overlook the innovation that’s happening at the model layer and the application layer.
Speaker
Jeff Bullwinkel
Reason
This comment challenges the entire premise of focusing primarily on infrastructure ownership and control. It argues that innovation and value creation happen across multiple layers of the technology stack, potentially making infrastructure ownership less critical than commonly assumed.
Impact
This reframing attempted to redirect the conversation toward innovation opportunities rather than dependency concerns. While it didn’t fully shift the discussion’s focus, it introduced important complexity about where value and control actually reside in modern cloud ecosystems.
Overall assessment
These key comments collectively transformed what could have been a narrow technical discussion about cloud market concentration into a rich, multi-dimensional conversation about democracy, trust, sovereignty, and innovation. The most impactful interventions challenged binary thinking – moving beyond simple dichotomies of dependence vs. independence, local vs. global, or security vs. innovation. Instead, they introduced nuanced frameworks for thinking about these trade-offs contextually. The discussion evolved from identifying problems to exploring practical approaches that balance multiple competing values. The Ukrainian example and trust metaphor were particularly powerful in grounding abstract policy concepts in human realities, while the calls for socio-technical approaches and democratic participation broadened the conversation beyond technical experts to include broader societal stakeholders. Overall, these comments elevated the discussion from a policy workshop to a fundamental examination of how societies should govern critical digital infrastructure in an interconnected world.
Follow-up questions
How can the EU data boundary for Microsoft Cloud be further strengthened to ensure complete protection against US Cloud Act requirements?
Speaker
Corinne Katt
Explanation
This question addresses a critical gap in understanding how Microsoft’s European commitments actually protect against US government data access requests, which remains a key sovereignty concern for European governments and organizations.
What specific mechanisms can ensure effective multi-stakeholder governance frameworks beyond just gathering stakeholders at a table?
Speaker
Agustina Brizio
Explanation
This highlights the need for research into practical governance structures that give meaningful decision-making power to all stakeholders, not just token representation, which is crucial for democratic oversight of cloud infrastructure.
How can Global South countries develop sustainable technical capacity and stable policy frameworks for long-term cloud sovereignty strategies?
Speaker
Agustina Brizio
Explanation
This identifies a critical research area for understanding the specific challenges and solutions needed for developing countries to achieve digital autonomy without sacrificing technological benefits.
What are the most effective public procurement contract terms and enforcement mechanisms that governments can use to influence cloud provider behavior?
Speaker
Agustina Brizio
Explanation
This represents a practical area for further research into how governments can leverage their purchasing power to achieve sovereignty goals while maintaining service quality and innovation.
How can the balance between ‘open where possible, protective when necessary’ be operationalized in practice across different types of government data and services?
Speaker
Anke Sikkema
Explanation
This requires further research into developing clear frameworks and criteria for determining when protective measures are necessary versus when openness should prevail in cloud service decisions.
What lessons can be learned from the Ukrainian data migration case study for other countries’ data sovereignty strategies during crisis situations?
Speaker
Jeff Bullwinkel
Explanation
This case study raises important questions about how traditional data residency requirements may need to be reconsidered in light of physical security threats and crisis management needs.
How can innovation at the model and application layers be better supported and recognized in sovereignty discussions that tend to focus primarily on infrastructure?
Speaker
Jeff Bullwinkel
Explanation
This suggests need for research into more comprehensive approaches to digital sovereignty that consider the entire technology stack and innovation ecosystem, not just infrastructure ownership.
What specific steps should each stakeholder group (government, private sector, civil society) take to achieve meaningful digital autonomy?
Speaker
Jenna Fung
Explanation
This overarching question was posed to the audience as a call for continued discussion and research into practical actions that different stakeholder groups can take to address cloud concentration concerns.
Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.