WS #294 AI Sandboxes Responsible Innovation in Developing Countries
26 Jun 2025 09:00h - 10:15h
Session at a glance
Summary
This workshop at the Internet Governance Forum focused on AI sandboxes as tools for regulatory experimentation and innovation governance. Sophie Tomlinson from the DataSphere Initiative moderated a diverse panel of experts from government, business, academia, and international organizations to explore how sandboxes can help assess and govern AI technologies across different sectors.
Mariana Rozo-Pan introduced sandboxes as collaborative spaces where stakeholders experiment with technologies against regulatory frameworks, drawing parallels to childhood play with building blocks. The DataSphere Initiative has mapped over 150 sandboxes globally, demonstrating their expansion from fintech origins to AI applications across developed and developing countries. Meni Anastasiadou from the International Chamber of Commerce emphasized how sandboxes support the four-pillar approach to AI governance, particularly benefiting small and medium enterprises by providing safe testing environments before market deployment.
Alex Moltzau from the European AI Office discussed the EU AI Act’s incorporation of regulatory sandboxes, highlighting ongoing work with member states to develop implementation frameworks and cross-border collaboration mechanisms. Speakers from Africa, including Jimson Olufuye and Maureen, shared insights about the continent’s growing interest in sandboxes, with Nigeria developing frameworks for data protection compliance and AI strategy implementation.
Key challenges identified include resource constraints, the need for clear legal frameworks, transparency in eligibility criteria, and meaningful stakeholder engagement including civil society participation. Natalie Cohen from the OECD emphasized that sandboxes are just one form of regulatory experimentation, requiring careful consideration of policy objectives and exit strategies. The discussion highlighted sandboxes’ potential to build trust between regulators, businesses, and civil society while providing evidence-based approaches to governing emerging AI technologies responsibly across borders and sectors.
Keypoints
## Major Discussion Points:
– **What are AI sandboxes and why are they needed**: The discussion established that sandboxes are collaborative, safe spaces where different stakeholders (public sector, private sector, civil society) can experiment with AI technologies against existing or developing regulatory frameworks. They originated in fintech but are now expanding globally across sectors like health, transportation, and data governance.
– **Implementation challenges and resource considerations**: Speakers highlighted significant barriers including funding constraints, resource intensity for regulators, need for clear governance structures, eligibility criteria, and exit strategies. The discussion emphasized that sandboxes require substantial overhead for both regulators and participating businesses, particularly affecting SME participation.
– **Global perspectives and cross-border potential**: The conversation covered sandbox initiatives across different regions – from the EU AI Act’s regulatory sandboxes to Africa’s emerging sandbox landscape (25 national sandboxes, mostly in finance) to Asia’s health sector applications. There was significant discussion about the potential for cross-border sandboxes to address interoperability and international collaboration.
– **Stakeholder inclusion and civil society participation**: Multiple speakers emphasized the need to meaningfully include civil society and individuals affected by AI systems throughout the sandbox process, not just businesses and regulators. This was identified as an area needing improvement in current sandbox frameworks.
– **Trust-building and evidence-based regulation**: The discussion positioned sandboxes as tools to address mistrust between stakeholders and build evidence-based regulatory approaches for AI governance, with only 41% of countries trusting governments to appropriately regulate new technologies according to OECD data.
## Overall Purpose:
The workshop aimed to explore how regulatory sandboxes can serve as effective tools for AI governance, bringing together diverse international perspectives to discuss practical implementation strategies, challenges, and opportunities for using sandboxes to responsibly develop and regulate AI technologies across different sectors and regions.
## Overall Tone:
The discussion maintained a consistently collaborative and constructive tone throughout. Speakers were enthusiastic about sandbox potential while being realistic about implementation challenges. The tone was professional yet accessible, with speakers building on each other’s points and acknowledging different regional perspectives. There was a sense of shared learning and knowledge exchange, with participants openly discussing both successes and obstacles in sandbox development. The atmosphere remained positive and forward-looking, focusing on solutions and best practices rather than dwelling on problems.
Speakers
**Speakers from the provided list:**
– **Sophie Tomlinson** – Director of Programs at the DataSphere Initiative
– **Mariana Rozo-Pan** – Research and Project Management Lead at the DataSphere Initiative
– **Meni Anastasiadou** – Digital Policy Manager at the International Chamber of Commerce
– **Alex Moltzau** – Policy Officer at the European AI Office
– **Jimson Olufuye** – Chairman of AFICTA (Africa ICT Alliance), Principal Consultant at Contemporary Consulting
– **Natalie Cohen** – Head of Regulatory Policy for Global Challenges at the OECD
– **Moraes Thiago** – PhD Researcher at VUB in Belgium, also works at Brazilian Data Protection Authority
– **Jai Ganesh Udayasankaran** – Executive Director at the Asia eHealth Information Network
– **Participant 1** – Africa Sandboxes Forum Lead at the DataSphere Initiative (identified as Maureen/Amoturine based on context)
– **Audience** – Multiple audience members including Giovanna (Brazil Youth Program facilitator) and others
**Additional speakers:**
– **Bertrand de la Chapelle** – Chief Vision Officer at the DataSphere Initiative
Full session report
# AI Sandboxes for Regulatory Experimentation: A Comprehensive Workshop Report
## Introduction and Context
This workshop at the Internet Governance Forum brought together international experts to explore AI sandboxes in regulatory experimentation. Moderated by Sophie Tomlinson, Director of Programmes at the DataSphere Initiative—described as “a think-do-tank working on data governance and sandboxes”—the session featured representatives from government agencies, international organisations, business associations, and academic institutions. The discussion began with interactive Mentimeter polling, engaging participants on their associations with sandboxes and sector priorities.
The conversation maintained a collaborative tone throughout, with speakers demonstrating enthusiasm about sandbox potential whilst remaining realistic about implementation challenges. Participants built upon each other’s contributions, creating knowledge exchange that reflected the global nature of AI governance challenges.
## Understanding AI Sandboxes: Definitions and Evolution
### Conceptual Framework
Mariana Rozo-Pan, Research and Project Management Lead at the DataSphere Initiative, opened with a compelling childhood metaphor: “We often forget how we used to play when we were kids. And as we were children growing up, we were actually quite excited about experimenting and about thinking about building things, building them, and then kind of destroying them and building something new again.”
This framing established sandboxes as collaborative, safe spaces where different stakeholders—public sector, private sector, and civil society—experiment with technologies against existing or developing regulatory frameworks. Rozo-Pan defined sandboxes as environments enabling stakeholders to “craft solutions, experiment with technologies” in structured yet flexible ways.
### Global Expansion
The DataSphere Initiative’s mapping revealed significant global expansion: Rozo-Pan noted that an initial mapping of over 66 sandboxes has since grown to around 150 worldwide. This represents evolution from origins in financial technology to encompass AI applications across diverse sectors including health, transportation, and data governance. The expansion spans both developed and developing countries, indicating widespread recognition of sandboxes as valuable regulatory tools.
## Business and Industry Perspectives
Meni Anastasiadou, Digital Policy Manager at the International Chamber of Commerce, provided the business community’s perspective. She positioned sandboxes within a broader approach to AI governance, emphasising their particular value for small and medium enterprises (SMEs) that may lack resources for extensive regulatory compliance testing.
Anastasiadou argued that sandboxes are “particularly beneficial for SMEs,” addressing a critical gap in the innovation ecosystem. She emphasised that AI governance frameworks need to be “harmonised, flexible, and supportive of innovation while reducing compliance complexities,” positioning sandboxes as tools that can achieve this balance.
## European Union Implementation Framework
Alex Moltzau, Policy Officer at the European AI Office, provided detailed insights into the EU’s approach to incorporating regulatory sandboxes within the AI Act framework. The EU’s implementation represents one of the most comprehensive attempts to integrate sandboxes into formal AI regulation.
Moltzau explained that the EU AI Office is developing implementation frameworks in collaboration with member states, with a draft Implementing Act for AI regulatory sandboxes expected for public consultation in autumn. The EU approach emphasises that SME participation should be free according to AI Act provisions, addressing equity concerns raised throughout the discussion.
Moltzau positioned sandboxes within evidence-based policy-making frameworks, noting that “exit reports are crucial for dissemination and getting value from sandbox investments.” He also mentioned cross-border collaboration potential, stating that “cross-border sandboxes can facilitate extensive collaboration on transport, health, and other sectors between regulatory environments.”
## African Perspectives and Emerging Markets
Jimson Olufuye, Chairman of AFICTA (Africa ICT Alliance), provided insights into Africa’s engagement with sandbox approaches. He noted the continent’s growing interest in AI applications as countries develop their digital strategies, emphasising that “regional cooperation is essential for products with countrywide and regional benefits.” Olufuye also referenced the Global Digital Compact (GDC) in discussing international cooperation frameworks.
Maureen, identified as the Africa Sandboxes Forum Lead, provided ground-level insights into practical implementation challenges. She highlighted two critical issues: funding constraints and legal authority questions. Regarding funding, she noted that “funding challenges exist, with potential solutions including cost-sharing models between affected sectors.”
More fundamentally, she observed that “legal backing for sandboxing authority is often unclear and needs to be established,” representing a significant barrier as many regulators want to establish sandboxes but are uncertain about their legal authority to do so.
## OECD Analysis and International Frameworks
Natalie Cohen, Head of Regulatory Policy for Global Challenges at the OECD, positioned sandboxes within broader regulatory experimentation frameworks. She provided crucial context with a striking statistic: “Only 41% of countries trust governments to appropriately regulate new technologies, showing need for evidence-based collaboration.”
Cohen emphasised that sandboxes “require significant governance resources, clear eligibility criteria, testing frameworks, and exit strategies,” highlighting substantial overhead involved. She noted the importance of avoiding market distortions whilst supporting innovation: “Governments need to consider funding support for SMEs and avoid creating market distortions while providing participation incentives.”
## Academic and Research Perspectives
Moraes Thiago, a PhD Researcher at VUB in Belgium who also works at the Brazilian Data Protection Authority, introduced a critical dimension: meaningful civil society participation. He argued that “civil society and individuals affected by AI solutions need meaningful roles before, during, and after sandbox implementation.”
His perspective emphasised that sandboxes should consider “individuals that are having their personal data processed or that will be affected by these AI solutions, regardless if the personal data has been processed or not.” This broader conception challenges sandboxes to move beyond business-regulator dialogues to include those most impacted by AI systems.
Regarding documentation, Thiago noted that “exit report authorship varies between companies and regulators, with flexibility in approach depending on context.”
## Health Sector Applications
Jai Ganesh Udayasankaran, Executive Director at the Asia eHealth Information Network (representing 84 countries with 2,600+ members), provided insights into health sector applications. He emphasised that health sector sandboxes can address “universal health coverage, interoperability standards, and cross-border data sharing needs.”
Significantly, Udayasankaran challenged traditional regulatory paradigms by advocating for sandboxes as “collaborative spaces with hand-holding support rather than just gatekeeping,” suggesting a fundamental shift from adversarial compliance checking to collaborative capacity building.
## Trust Building and Stakeholder Relations
Bertrand de la Chapelle, Chief Vision Officer at the DataSphere Initiative, provided a crucial intervention addressing underlying trust deficits. He observed: “there are key words that we don’t dare to use, but that are very important in this discussion. One is mistrust… And we have to recognize that in the last 20 years, a huge amount of mistrust has grown between public authorities, private actors, and civil society.”
He positioned sandboxes as “one of the tools that brings the capacity of dialogue, particularly when the discussions are taking place very early on,” framing them as trust-building mechanisms rather than merely technical regulatory tools.
## Key Implementation Challenges
### Resource and Legal Framework Issues
Throughout the discussion, resource constraints emerged as a persistent challenge across different contexts. The African perspective highlighted particular challenges in developing economies, while European experiences demonstrated that even well-resourced regulatory systems face significant overhead requirements.
Legal uncertainty about regulatory authority for experimental approaches creates barriers to sandbox development across multiple jurisdictions. Many regulators expressed interest in establishing sandboxes but lacked clarity about their legal authority to engage in experimental regulatory approaches.
### Stakeholder Engagement
The discussion revealed significant challenges in ensuring meaningful stakeholder participation, particularly for SMEs and civil society organisations. While there was strong consensus on the importance of inclusive participation, speakers identified multiple barriers including resource constraints and complex application processes.
## Audience Engagement and Questions
The session included significant audience interaction through Mentimeter polling and Q&A. Giovanna from the Brazil Youth Program asked detailed questions about exit reports and documentation processes, highlighting young professionals’ engagement with sandbox development.
A representative from Vietnam inquired about policy packages and legislative features, demonstrating global interest in practical implementation guidance.
## Areas of Consensus and Disagreement
### Strong Consensus
The strongest consensus emerged around sandboxes’ collaborative nature, requiring meaningful participation from public sector, private sector, and civil society actors. All speakers agreed on the need for special SME support, including free participation and funding assistance.
There was universal acknowledgment that sandboxes are resource-intensive endeavours requiring careful planning, adequate funding, and proper documentation.
### Key Tensions
Speakers differed on implementation approaches, with some advocating supportive, collaborative approaches while others emphasised rigorous evaluation and market neutrality. Different regional perspectives proposed varying solutions to resource constraints, from cost-sharing models to government funding responsibility.
## Future Directions
Several concrete action items emerged, including the EU AI Office’s draft Implementing Act for public consultation and continued collaboration through the DataSphere Initiative’s coaching and master classes. The OECD committed to developing toolkits for sandbox implementation.
## Conclusion
The workshop revealed remarkable consensus on AI sandboxes’ value as tools for regulatory experimentation and innovation governance. Despite diverse geographical and institutional perspectives, speakers demonstrated strong alignment on fundamental principles including collaborative approaches, SME support requirements, and the value of cross-border cooperation.
The discussion successfully addressed broader challenges of trust-building and institutional legitimacy in technology governance. The recognition that sandboxes serve trust-building functions beyond their immediate regulatory purposes provides important context for understanding their growing global adoption.
Key challenges remain around resource allocation, legal framework development, and meaningful stakeholder engagement. However, the strong consensus on fundamental principles provides a solid foundation for addressing implementation challenges through continued collaboration and knowledge sharing.
The workshop’s collaborative tone and constructive engagement across different perspectives suggests that the sandbox community has developed effective mechanisms for knowledge sharing and mutual learning, potentially serving as a model for broader technology governance challenges requiring international coordination.
Session transcript
Sophie Tomlinson: Hello everybody and welcome to this workshop on AI sandboxes. Thank you so much for choosing to spend what must be your morning with us. My name is Sophie Tomlinson, and I’m the Director of Programs at the DataSphere Initiative. For people who aren’t familiar with our work, we are a think-do-tank working on data governance and sandboxes, working with businesses, governments, and civil society on how we can responsibly unlock the value of data for all. We’re here today to talk about how sandboxes and different types of experimental regulatory approaches can help us in using AI, in assessing whether we want to or need to use AI, and also approaching these governance questions that we face as we see AI penetrating different types of sectors. So what I’d like to just share with you before we get started is a QR code to a Mentimeter that we will be running. Please check out the QR code and go to the first question we have for you because we’d love to get your insights. We have a very diverse and an exciting panel today with many different speakers, and as you can see from this list we have a couple of people online, but also in person with us here in Oslo at the IGF. I’m going to introduce them as we go through the session, but as you can see all their names here. So first of all, what I’d like to start with is what is a sandbox, and what do we know about this as a concept and a tool for tech development and policy innovation? I’d like to hand over to Mariana Rozo-Pan, who is the Research and Project Management Lead at the DataSphere Initiative, to give us a first look at what sandboxes are and their potential for AI. So, Mariana, over to you.
Mariana Rozo-Pan: Thank you, Sophie. And hi, everyone. Good morning, good afternoon, good evening. We are very excited about hosting this workshop. I think it’s like the third workshop that we host at the IGF focused on sandboxes. And for those that are here in person, I’d actually like to see a little show of hands of who here played in a sandbox as a kid, or maybe with Legos, with building blocks, building things. I see laughs and hands going up, even from the technical team, which is exciting. Well, I did, too. And I was actually quite obsessed with playing with Legos and building things. And one of the things that we realized when it comes to governing data responsibly and emerging technologies responsibly, is that we often forget how we used to play when we were kids. And as we were children growing up, we were actually quite excited about experimenting and about thinking about building things, building them, and then kind of destroying them and building something new again. And that flexible, agile mindset that maybe we had when we were children is what we’re often lacking when it comes to building agile regulations and shaping how we’re governing technologies, building technologies, and addressing the complex challenges that we’re facing nowadays. So, sandboxes, I would actually like for us to go to the mentee. Could we look into the answers of that first question that we had? Thank you. So, I’m seeing that people are answering collaboration, solution. That’s what comes to mind when you hear sandboxes, which is an exciting response, I must say. And that’s actually what sandboxes are all about. It’s about flexibility. It’s about collaborating. So, sandboxes are collaborative spaces, safe spaces for collaboration in which, by nature, different stakeholders come together to craft solutions, experiment with technologies. 
There are different types of sandboxes, as we will be more than happy to share more of later, but regulatory sandboxes are those in which the different stakeholders, the public, the private sector, and hopefully also civil society, experiment and test technologies against an existing or an in-development regulatory framework. And operational sandboxes are those in which different stakeholders test with the data or with existing technologies. Sandboxes can also be, and often are, hybrid, and we can go more into that. They were originally created within the finance sector to test financial technologies and they are now being used across sectors for AI, for health, for transportation and in many other use cases. And they are a promising methodology in the end that has already been implemented, again, across sectors and is being pretty effective in driving innovation and ensuring that we are doing things as we were when we were growing up. So I’m seeing very interesting responses, testing, collaboration, solution and if we go back to the slides that we had, I also wanted to share at the DataSphere Initiative we have been doing intense and extensive work around sandboxes and we’re sharing here our sandboxes for AI report which is our latest report focused on the potential of sandboxes for AI. We have a mapping that initially identified over 66 sandboxes and now counts around 150, focused on different topics and particularly on AI innovation, and here you can also see a map of the distribution of sandboxes across the world, which is a very exciting and interesting methodology that’s being implemented not only in developed countries but also in developing economies and in countries throughout the global south, in Latin America, in different countries in Asia and in Africa. So we’re seeing that this is a tool that’s proven interesting, successful and powerful when it comes to testing bold ideas in collaborative and safe spaces.
And at the DataSphere Initiative we also have a methodology on how to do a sandbox that includes not only thinking about how to do them responsibly but about responsible design, effective communication and engagement and making sure that it is not only a space where specific startups or private companies have access to resources and testing and iteration but it’s also a space that in the end creates public value and translates into better technologies for our society in general. So that’s a bit of a snapshot of what we do and back to you Sophie for our interesting conversation today.
Sophie Tomlinson: Thank you Mariana. So why sandboxes for AI in particular? This is what we want to talk about now in this first session. And I’d like to welcome Meni who is the Digital Policy Manager at the International Chamber of Commerce to share her thoughts on this. So Meni, you’re working at ICC with businesses from all around the world across all sectors. What do you think are the types of AI governance approaches that are needed and how could sandboxes play a role in the context of AI? Thank you. Sorry, just taking this off.
Meni Anastasiadou: Thank you so much Sophie and many thanks for the wonderful invitation to participate at this session today. I am Meni Anastasiadou, I’m the Digital Policy Manager at the International Chamber of Commerce. For the colleagues that might not know us, we are the institutional representative of more than 45 million businesses across 170 countries. So we really have an inclusive membership that goes beyond sectors and geographies. So AI is really an incredible tool. We see it transforming industries all over the world and really providing productivity gains and improving efficiency and lowering costs for various different sectors and again shapes and sizes of businesses. So we also see this as I’ve mentioned being especially true for SMEs which are the backbone of the global economy and how we govern AI particularly impacts SMEs. So I really like the presentation that showed earlier how we should consider AI governance approaches that are fitting to multiple different sizes let’s say of stakeholders including of course businesses and this really speaks to the fact that we should be mindful of the approaches that we take when we’re talking about AI governance to ensure that we are inclusive and supportive to innovation. So to that point particularly, ICC has put forward a proposal for an AI governance framework that we call the four pillar approach. We publicize it through the ICC narrative on artificial intelligence in September of last year and the thought process that we present around this is that in order to ensure that we sustain the use of AI in a safe way that benefits different sectors, economies around the world, we really need to make sure that AI governance frameworks are harmonized with existing global agreements so that they don’t really create a patchwork of regulations which as we know makes it particularly challenging And this can help reduce barriers and compliance complexities. 
We should also make sure that AI governance frameworks are flexible and do not hinder investment. And of course that they create at the end of the day favorable commercial conditions that can support entrepreneurship. So back to my point on ICC’s four-pillar narrative or four-pillar approach to AI governance. So what we want to show is that by adhering to the ideas that I mentioned earlier, all businesses can really harness the power of AI to drive innovation and ensure compliance and build trust. So if we align AI governance with that in mind, we can really ensure that everyone is equipped to harness AI and accelerate their growth. Now regulatory sandboxes are really a great tool that actually respond to this framework of governance and they can really enable the safe and real-world testing of AI systems, particularly for SMEs. Mariana, you spoke earlier about how sandboxes were first used in fintech, but since then we have seen how their use has spread to cover other areas and an inclusive set of geographies. So I really like the mapping that you’ve shown us earlier, which really speaks to how important a tool AI sandboxes are to the trustworthy and safe AI governance model. So just to give you an example, speaking to the use of sandboxes and how those are effective for SMEs: just as engineers in aerospace are always testing on the ground how an airplane works to make sure that it’s safe before it actually flies, it’s the same idea, the same principle, making sure that we bring together all stakeholders to have the time to test if AI works, what are the right safeguards to apply, what are the right principles and guidelines to make sure are in place, so that we make complete use of all the benefits that AI has to offer. And this can eventually help all deployers, developers, and users of AI, and eventually SMEs too, to take off just as airplanes do.
So perhaps maybe I can stop here.
Sophie Tomlinson: Thank you for that analogy, I think it’s really helpful. And I’d like to move on to Alex Mosul, who’s the Policy Officer at the European AI Office. Alex, your role at the AI Office has a component which is very much focused on sandboxes. Could you share a bit of background on the role of sandboxes in the EU AI Act and why you are thinking of how we can actually use these tools?
Alex Moltzau: Yes, of course. I would be happy to, Sophie, and also thank you so much for those considerations, Manny. It’s really important, as you say, that we create favourable commercial conditions. I think it’s this balancing act of responsibility, but also innovation and how we get that right. Because as citizens, we want great products, but also safe products and services. So I’m just going to spend three minutes to talk about three things. The first is a bit of my story, and the second is this balancing act, and the third, what are we doing in the European Commission and this implementing act that we are working on. So I have a background as a social data scientist and also with artificial intelligence in public services. So I worked five years with AI policy nationally in Norway with the research community, with machine learning, artificial intelligence and robotics. However, I was also involved a lot of the time in this sandbox we had in Norway for privacy, but that had exclusively AI cases. and a lot of exit reports that you can find on the internet if you search for the privacy sandbox in Norway. So being here in Norway and being back, I now work in Brussels, and I’ve been there for one year with my family, working in the European AI office, and it’s been quite a journey to start a new place. But I have to say, it’s a really wonderful place to be if you’re interested in AI policy and law, and it’s brought me to think about the whole European region, right? And how do we get this balancing act right? Because I think as a region, we have an approach, we have certain values that we aspire to, and for us, I think we want to be treated in the best way possible, as citizens, as co-workers, as part of society. So I think it’s the case that if we want to have responsible innovation, we need an evidence basis to inform that policy, right? 
So if we don’t learn this regulatory learning, kind of like there are regulators that are building their competence on AI as we speak, and try to see what is the right way to ensure that we get this innovation that we want, but also in a way that fulfills citizens’ needs, and it’s not just based on a buzzword, or based on a promise that is unfulfilled. So to not waste money and time, we have to make sure that products actually work as intended, and the sandboxes are, I think, a really good mechanism, a good policy mechanism to do this. So what are we doing in the European Commission right now? We are really working together with member states. We have an AI regulatory sandbox subgroup under the AI board, so we work with member states on a very regular basis. We are writing an Implementing Act for AI regulatory sandboxes, and we are supporting the rollout of the sandboxes across Europe with the Coordination and Support Action EUSAiR. So I think in that sense, there’s a wide range of things that we are doing, but right now it’s just sort of like, what frameworks are we looking at? And I have to say, in the autumn as well, we will be putting out this, because that’s part of the democratic process, for you to comment as well. So I just encourage everyone who is listening to keep track of when we are releasing this draft Implementing Act, so that you also can tell us about your opinion, because we are not the arbitrators of knowledge in a sense that we just want to understand how to do this in the best way possible, right? So it’s not necessarily true that Europe has all the best solutions. I think we have to look globally at how we can do this together, which is also why I’m here today.
Sophie Tomlinson: Thank you. Thanks so much, Alex. And as Mariana shared on her map, which maps different AI and data sandboxes around the world, sandboxes for digital policy challenges and new technologies aren’t just being looked at in Europe. It’s also a pretty global tool that’s being explored in Asia, and notably Singapore, also in Latin America, Brazil, Chile, and also in Africa as well. The DataSphere Initiative has a whole component on Africa and sandboxes, the Africa Sandboxes Forum. And as part of this work, we hosted a co-creation lab in Abuja, Nigeria, as part of the African Data Protection Conference that took place in May. And Nigeria itself is looking at developing a sandbox in the context of their data protection law as a way to help companies comply with this new data protection law. And I’d like to bring in Jimson, who is the chairman of AFICTA, which is one of the biggest private sector organizations in Africa and bringing together companies from all across the continent. Jimson, you were there in Abuja and have been working on regulatory innovation and technology for many, many years. Why do you think a sandbox is an interesting way to develop policy and innovation, and could you share a bit about how Nigeria is also thinking about this?
Jimson Olufuye: Thank you very much, Sophie, and good morning, everybody. My name, again, is Jimson Olufuye. I’m the chair of the Advisory Council of the Africa ICT Alliance, AFICTA. AFICTA was founded in 2012 with six countries in Africa, and today we cover about 43 countries in Africa. Our members are ICT associations, companies, and individual professionals across Africa. Well, that is my volunteer work, and for my day work, I run Contemporary Consulting. I’m the principal consultant there, and we’re into data center management, cyber security, integration, software, and research. So it’s really a great pleasure to serve in that capacity as AFICTA chair, and as its founder at that time; even right now, I’m still very much involved in it. And I’m very, very happy to be associated with DataSphere and with the topic indeed. Yes, Sophie, we had a great event in Abuja last month. It was really spectacular. We had Maureen, RISPA, and of course, you also joined virtually. So it was a great capacity development event, and I want to congratulate you for that. And I can see Bertrand right there in the public. I appreciate all your work, really. You’ve been on it for quite a while, a veteran indeed. So thank you for what you do. The concept of sandboxes is very important, very relevant, and very appropriate, especially in the AI sphere, because we all know that AI is the main thing, and many people are concerned about the ramifications of AI, maybe for harm. But we believe there’s a lot of good in it if it is properly regulated going forward. We do know that the whole essence of even our gathering here, the IGF, as part of WSIS, is so that we can have a people-centered, inclusive information society. And the information society is still evolving, and AI is going to play a very important role, and that’s why I was happy to be part of that workshop. We had regulators there. 
We had the Nigerian Data Protection Commission there fully, and also the NCC, the Nigerian Communications Commission, and also companies, AI companies, civil society people, academics, and quite a number more. And it was really very rich. We looked at the three aspects indeed, operational, regulatory, and hybrid, and we had case studies. That was quite interesting. And so the meeting aligned with the expectations of the participants and the broad stakeholders in AFICTA, simply because Nigeria has just developed its AI strategy, bringing all stakeholders to work on it. Now we’re moving to an AI law, and we need regulation, and it must start with data governance, basically. And that’s why the Nigerian Data Protection Commission took this very seriously, and they have indicated that indeed they’re going to adopt it because it will help with proper regulation. Even some of us that develop applications learned a lot about how we can actually use it to our benefit in terms of market reach, in terms of the kind of product we need to design, in terms of what customers want. And we of course knew that the Central Bank of Nigeria actually adopted some form of sandbox, even to regulate the financial sector. So it’s a rich concept, and I think we need to enrich it more. We need to keep the conversation going. Because right now, fewer than 10 African countries have an AI strategy. Fewer than 10. And from AI strategy, we need to move to AI regulation, and regulation is very key to direct products, because we don’t want products that are for harm. We want products that are for good, that will be beneficial to the people, and that also bridge the digital divide, which is the main idea of WSIS, of the GDC, and of course of the Sustainable Development Goals. So this fully aligns with WSIS, the GDC, and the expectation for the achievement of the Sustainable Development Goals. 
We will continue to support the advocacy and the engagement so that regulators can do the right thing and also our members too can know what is expected of them concerning their products. Thank you very much for this opportunity.
Sophie Tomlinson: Thank you so much, Jimson. Thanks for also putting what we’re talking about here today in the context of the wider world of Internet governance and within the WSIS process. So now I’d like to actually just go back to the Menti again, if we can have a look at the second Menti question we have for all of our participants. For this question, we’d like you to have a think about the types of sectors and areas where AI is being applied; the data governance point that Jimson mentioned is also in there as an option. What issue do you think would benefit most from an AI sandbox? So what kind of sector do you think it could be most helpful for? While those results come in, what I’d like to do now is move on to the next part of the discussion, which is looking at when and how do you actually do one of these sandboxes? We’ve heard in this first part a lot of excitement and interest in the potential of these tools. But if you’re a government, you’re actually starting to think: okay, do I have the resources to design and set up a sandbox? Or if you’re a company: what are the incentives for me to actually participate in a regulatory sandbox, or to set up one myself, which could perhaps be an operational sandbox? Where do you start? I’d like to bring in Natalie Cohen, who’s the head of regulatory policy at the OECD, to start us off. Natalie, based on your research and what you’ve been doing with a diverse group of governments, what is the role of sandboxes within the regulatory process, and what sort of challenges might governments face?
Natalie Cohen: Thank you very much, Sophie, and thank you for the opportunity to be here today. Just to clarify, I’m Head of Regulatory Policy for Global Challenges specifically, and one of the things that I’m looking at is this question of how do you regulate for new technologies, and how do you regulate in a way that is innovation-friendly? And the OECD’s answer to that, I would say, is the 2021 Recommendation on Agile Regulatory Governance to Harness Innovation. As part of that recommendation, we have a big focus on regulatory experimentation, and sandboxes are just one aspect of regulatory experimentation. So I think a first consideration for a government is to look at the specific policy objectives they want to achieve and then what is the best way for them to achieve that, because regulatory experimentation can also mean policy prototyping, it can mean innovation testbeds, or it can just mean using piloting powers to test different processes. So we think regulatory experimentation is really important; it helps policymakers, regulators and industry come together in a collaborative way, as the Mentimeter pulled out, to shape and improve regulatory environments in a way that manages the tensions that can be created between regulation and innovation. And we think sandboxes are particularly well suited to regulatory experimentation where companies are more towards the stage of early commercialization or on the point of bringing something to market, and they want to influence the regulatory framework around that and remove barriers to actually accessing the market, whereas some other forms of experimentation like innovation testbeds might be more around proof of concept. 
And as has been mentioned, sandboxes are not new; they have been around for a while and they have been used with success, specifically in the fintech sector. At the OECD we have two aspects to our work: we provide tools and guidance to help governments develop and build sandboxes, and we also provide technical assistance and support to countries to set up a sandbox, but sometimes also to fix a broken sandbox. So one thing I would like to say is they’re not always the perfect answer; they can be quite resource-intensive to manage, they do require governance resources, and they do have certain elements that need to be in place to ensure success. So for example, governments need to think about the eligibility criteria for what kinds of businesses and innovations they want to test and make sure that that is transparent; they need to be clear about the testing framework and the evaluation process that will be in place to make sure they actually have good evidence that can then go on to influence regulatory policy; and they also need to have a kind of exit ramp, so that at the end of the sandbox they know when to close it down and what the route is for companies to then actually bring products and services to market on the back of that. So all of these things can require a lot of overhead, both for the regulators, who need to be funded and resourced and have the capability to manage that process, and also for the participating businesses. So one thing governments need to think about is also providing funding support to businesses, particularly if they want SMEs to come and participate. Some sandboxes have been successful in terms of testing products and services and bringing them to market, but they’ve been primarily successful with larger corporates. What SMEs need support with could be accessing data, or it could be legal and compliance resources as well. 
So, that’s another thing to think about if you want to create a diverse and sustainable approach to sandboxes. So, I’ve mentioned a couple of the kind of the key issues around things that countries need to think about there. There are various functions to manage within sandboxes around the impact on competition and innovation. So, regulators and policymakers will be keen not to create market distortions, not to kind of overly favor the participants that play in sandboxes, while at the same time there need to be incentives for businesses to participate. They need to have some kind of benefit, whether that’s accelerating their route to market or providing them with enhanced support around some of those resourcing and funding considerations that I have mentioned. So, the OECD is in the process of publishing a toolkit on how to develop and design sandboxes that will come out in the coming weeks. And as I mentioned, we provide technical support to both members and non-member countries. So, we’ve done work on Croatia that has led to the development of this toolkit and we’re about to start a project with Portugal on one of their sandboxes too. So, I think another thing is countries might also need advice and support on how to deploy these things and that’s where they can reach out.
Sophie Tomlinson: Thank you very much, Natalie. Very helpful, and you made lots of points that I really want us to come back to. So, can we just get the results of the Menti to see the different sectors people thought could be particularly useful for AI sandboxes? Okay, yeah. I guess finance is maybe not surprising, thinking of how sandboxes originated as a concept, but we can see health as well being a big one, which is good because we’re going to have some discussion on health a bit later in this session. Now, I’d like to bring into this conversation Thiago Moraes, who is a researcher at VUB in Belgium. Thiago, you’re researching sandboxes around the world, and you also have some experience yourself participating in and designing one. Following on from what Natalie was saying about some of the challenges that governments can face in actually setting up a sandbox, or in deciding whether or not this is the right type of regulatory experimentation tool: what could you say in terms of how governments can best manage resources to set up a sandbox, include transparency, and maybe also bring in different types of stakeholders like civil society? Could you share some of your thoughts on this, please?
Moraes Thiago: Yeah. Hello, everyone. And thanks, Sophie, for the invitation. It has been very nice to be engaging with the DataSphere and other colleagues that I see here in several forums, and the IGF is definitely relevant for such a topic. Just before starting, a very clear disclaimer: I’m speaking today as a PhD researcher at VUB, as you mentioned. Some of you might know me as well as a practitioner from the Brazilian Data Protection Authority, but today I’m not speaking on behalf of them. Of course, as part of my role there, I’ve been working with several colleagues to launch a pilot sandbox, and hopefully there will be news on that soon. So, yeah, it will be a nice way to see how a young authority is dealing with this challenge of establishing something that can be very resource-intensive, but at the same time is manageable if some care is taken. And maybe my comment then will be to complement a bit what Natalie said, but also to show the other side of the coin. It’s true that the sandbox is not the only experimentation tool, and I think any regulator that wants to establish one has to consider if, how, why and when to establish a sandbox: all the questions that we’re discussing here. But one thing that we have also learned, and I’ve seen based on the experience of different jurisdictions, is that it’s quite common, when you’re still testing the waters, to sandbox the sandbox. So basically, you create a pilot. And in these pilots, several times, you work with the resources that you have to decide the scope and how broad your sandbox will be. This means, for example, deciding whether you rely more on your internal staff and the expertise they already have, or whether you will have some kind of partnership, or contract specific experts or consultants. So all of this will depend on your conditions, of course. 
But there are several institutions that are actually supporting such initiatives. For example, at the international level, we have development banks; in Latin America, we have CAF. There’s also CEPAL, and other institutions around the globe are also trying to engage in support. So this is a way of dealing with this challenge of limited resources. In the end, the word cooperation is really important here, so it makes a lot of sense to be discussing this at the IGF. And one other thing that I believe is very important to consider is how you frame who will be your co-partners in this endeavor, and for how long, because some sandboxes can be quite short: there are even cases of three-month sandboxes, and there are others that go very long, like five years. But in general, and there are a lot of global reports and academic research on this, on average we are aiming for something like six months to one or two years; it really depends on the goal of what you’re testing, right? And you can also have flexibility in how many projects, how many use cases, you’ll be dealing with at the same time. So all of this is part of the design of the sandbox, and very important to consider. My last comment for now touches on what Sophie just mentioned: when we talk about the several stakeholders being engaged, either participants or partners, we often forget the role that civil society has here, especially as we move into this arena of sandboxing in AI, and sandboxing AI in several circumstances. We’re talking about individuals who are having their personal data processed, or who will be affected by these AI solutions, regardless of whether personal data is being processed or not. 
And because of that, I think it’s very important to also hear the voice of these individuals, and maybe this is something that we need to improve in our frameworks: what will their role be throughout the whole sandbox experience? Because civil society and individuals might have an important role before, during and after the sandbox is done. And this is actually what I’m researching right now. So for now, I only bring this as a provocation, but I hope in the future, as we continue engaging in these forums, I may be able to also share some insights on what I find about the potential role of civil society here. But I would be glad to hear other colleagues’ comments on that. Thank you.
Sophie Tomlinson: Thank you so much for that, Thiago. You covered a lot, which I think we’re going to have time to come back to. I’d like to also bring in now Maureen Amoturine, who is our Africa Sandboxes Forum Lead at the DataSphere Initiative. Maureen has been doing a lot of work researching how sandboxes are being used in Africa, interviewing private sector players who have been participating in sandboxes, and also governments setting them up. And Maureen, picking up also on what Natalie was saying about some of the barriers, the thinking that governments need to do on how to actually go about setting up a sandbox, and also the companies themselves joining: what would you say have been some of your lessons from Africa as you’ve been researching this? And could you also mention a bit about the types of training and support that the DataSphere Initiative is providing to governments who are planning to set up sandboxes?
Participant 1: Sure, happy to, Sophie. Thank you so much, and I’m really glad to be here and to see you all who are participating. So, let me start by just sharing a few numbers. We’ve looked into sandboxes in Africa, and overall, at least as of the last time we updated the mapping, which was sometime earlier this year, we have about 25 national sandboxes. And of these, 24 are in the finance sector. This means that government authorities outside finance are only now starting to get into sandboxing. So it’s a new space, a space in which they have to identify quite a number of the core elements of sandboxes. The beauty is, from the conversations that are happening on the continent, regulators have really embraced the idea of experimentation when it comes to regulating emerging technologies like AI, and so they’re really embracing the idea of sandboxes. But from what we are learning, there are still questions when it comes to those core elements: the how, who, when, and all the details that go into sandboxing are some of the things they are grappling with, because this is a new space they are getting into, sandboxes having largely been used in the fintech sector. And so part of what we have been doing is, of course, learning from what is available online. Who is sandboxing? Do they have lessons to share with those who are getting into the space? That we have documented in the report that Mariana shared earlier, the Africa Sandboxes Outlook report. But we are also going ahead to engage with stakeholders, so far largely regulators, but we are also starting to engage with the private sector and other types of stakeholders, to now start thinking about the core elements of sandboxing. 
I mean, things like the scope of a sandbox; who are the people you’re going to work with, the actors and the stakeholders; then the legal models under which authorities can sandbox, because that is also not yet clear; but also the resources, which are a huge part of sandboxing. We have learned that a number of regulators are indeed grappling with the question of how a sandbox gets funded. Where does the funding of a sandbox come from? So those are the areas in which we are trying to engage with people. And the idea of raising funds for sandboxing has had different approaches in different places. Also, as we carry out the activities of the Africa Sandboxes Forum, we are learning a lot from sandboxes being operated outside of Africa to see what has worked. And so you will notice that some sandboxes that are run by public institutions or authorities have their funding coming from the core operations of the authority, say a data protection authority. So there is that. But because it’s not yet the case that there is core funding for experimentation in some of these authorities, what we are exploring in Africa, under the co-creation activities that we are doing with different stakeholders, is to get people to put themselves in the shoes of someone setting up a sandbox and think about: okay, who would this sandbox affect when it comes to other regulators or other sectors? If it’s a data protection authority setting up a sandbox, they think about what other sectors this sandbox is going to affect, and whether they can bring those regulators in and think about cost-sharing models, where there is a shared benefit but also shared costs for the different sectors that are involved in such a sandbox. 
And then the other thing that we are also trying to make sure we brainstorm around with stakeholders is the legal models under which they can sandbox, because sometimes it’s not clear, and we learned that it’s actually one thing that regulators grapple with. Sometimes it’s not even clear whether authorities are allowed to sandbox at all. So, how do they find the legal backing to carry out such an experimentation? And if it’s not there, how can they go about that? These are questions that most regulators and stakeholders have not yet started thinking about; while they know they want to sandbox, thinking about how to actually approach it has been a challenge, and we have seen that in a number of the people we’ve been co-creating with. And drawing from, say, the Kigali co-creation lab that we did, we learned from stakeholders that they really would love to use sandboxes to understand whether some of the hype around AI, for example, is true for Africa, and to understand the real value of what some of these emerging technologies are bringing, so that they are able to take them to the next level. And so part of that is really what’s been taking our time in Africa: engaging stakeholders and understanding where they are at and how ready they are. I just wanted to mention that part of what we’re doing is, of course, group co-creation activities, but we are also offering services such as one-on-one coaching journeys for someone who is ready to sandbox and wants to navigate the journey through all the core elements of sandboxing, which are not necessarily straightforward. We are also conducting master classes with groups of stakeholders that are ready to learn how to technically run a sandbox. 
That is also part of the activities we are looking into, because the need for sandboxes is already there and has already been recognized by regulators. So now what’s really missing is that push to the next level: working with them on creating these sandboxes and navigating these challenges around resources, which we know are key almost everywhere.
Sophie Tomlinson: Thank you, Maureen. Thanks for also sketching out the different activities and sandbox support that we can provide at the DataSphere Initiative. First of all, just to mention, if people in the room want to make a comment or ask a question, there are two microphones on either side of the room, so please feel free. While people have a think about that, I’d also like us to bring up the next Menti question, which picks up on this discussion: we want to collect the kinds of challenges and barriers people may face when thinking about whether and how to set up a sandbox. And now I’d also like to bring in a perspective from the health sector, and I think this is timely because many of you highlighted this as a key sector where testing AI technologies and policy through sandboxes could be useful. So Jai, I can see you’re online; Jai is connecting here from the Asia eHealth Information Network, where he is the Executive Director. Could you talk to us a bit about where you think the potential for sandboxes is in the health sector? And could you particularly touch on how they could be useful in a cross-border context as well, because I know you’re doing a lot of this in terms of your convergence work at the Asia eHealth Information Network. Over to you, Jai, please. Jai, can you hear us?
Jai Ganesh Udayasankaran: Yes, Sophie, thank you.
Sophie Tomlinson: Great, thank you.
Jai Ganesh Udayasankaran: First, I would like to quickly introduce our network, the Asia eHealth Information Network, which is a regional digital health network with a core focus on Southeast Asia and the Western Pacific in terms of the World Health Organization regions, though we have over 2,600 members from 84 countries. Our primary focus is capacity building, and also support for the national digital health programs: we work with the governments in the countries, supporting them in terms of the core health information building blocks, and in terms of the gaps that currently exist in governance, architecture, people and program management, and standards and interoperability. So I think many of the speakers ahead of me have mentioned various challenges. One of the challenges is who should be involved in the sandboxes, and who is actually qualified to take the decisions: mostly the regulators, of course, and from the government point of view they usually are the ones who start or decide on the sandboxing criteria. But we have also had recent discussions, as I think Thiago also mentioned, about how civil society could participate meaningfully. But coming back to your primary question: we work extensively with countries now, with official representation from 15 countries, and two more are likely to join, in what is known as the Working Council. The Working Council is nothing but representation from countries which advise the AeHIN board of directors as well as our operations in the region. So we have seen sandboxes in the health sector, especially on AI, telehealth, and also for data governance and data sharing. But three things have come up in recent times. One is, of course, that many countries have universal health coverage programs, where the sandbox environment actually helps 
them to get applications developed by the private sector into the mainstream, as long as they conform to the standards set by the regulators. In most of the countries, as we are aware, digital developments have been very fast-paced, whereas the regulations, especially in the health sector, are decades old; there is still this pacing issue of catching up with developments in the regulatory space. So these emerging technologies do need support in terms of sandboxes. But in many countries, and this is also my experience, they don’t necessarily call it a regulatory sandbox; sometimes they call it a testbed, sometimes a living lab or testing environment, and then also a regulatory sandbox. So many of them use multiple terms, depending on the priorities and the local needs. The three most sought-after needs are these. The first is to get different health applications, even those developed by the private sector, into the national mainstream, conforming to the regulations, and, where regulations are currently still not there, to shape the regulations and policy space. The second one is interoperability, because most of the solutions need to be interoperable. There are standards, but still there are sandboxes set up to make sure that the solutions actually conform to the standards. So interoperability is the second use case. And the third one is about data. We have occasions, for example medical tourism, as well as people going for treatment in other countries, where there is a need for information to be shared, and at the same time in a very responsible way. So these are the broad areas in which we see sandboxes in our region. And in fact, we do have a country currently discussing with us their need for sandboxes and probably support. And they have expressed several challenges also. 
So, in fact, we look forward to working together with DataSphere and other partners, especially those who are willing to support us in terms of funding and capacity building in this space. I hope I answered your question. If not, please let me know. Thank you, Sophie and colleagues.
Sophie Tomlinson: Thank you so much, Jai. I think that really provides a good kind of snapshot of the different types of considerations, people and experts in the health sector, especially in the Asia region are thinking when it comes to how we can make the most of these different types of technologies for interoperable and cross-border health approaches. We’ve got also now some of the different points that we’ve heard from the Menti in terms of the different barriers. I’d also like to just note a question, a very useful question that we’ve had on the chat from a representative from the Institute for Policy Studies and Media Development in Vietnam. This is a think tank specializing in digital technology policy. Their question is around how do countries design policy packages and govern sandboxes? Should AI and data sandboxes be structured separately or integrated? And what types of legislative or regulatory features have proved the most effective in making participation more accessible to businesses and especially SMEs? So as we go now into our final set of interventions from speakers, I can’t see that anyone in the room wants to make a comment, so I’ll keep going. I’d like to invite Alex again to share some of his reactions to what he’s been hearing throughout the discussion today. The types of barriers that people have looked at when it comes to designing sandboxes and also linking to the cross-border potential of sandboxes that Jai was talking about. Could you tell us a little bit about how you’re also thinking of this in the context of the EU AI Act as well? That could be helpful. Over to you.
Alex Moltzau: Yes, of course. It’s really great to listen to all these different perspectives. I could start with the last questions, especially on how to facilitate SMEs and startups. The EU AI Act is fairly explicit that participation for SMEs and startups should be free. That is one mechanism, because if a startup or SME already has overheads, it could of course be challenging to participate. On Thiago’s questions about civil society, one of the really wonderful things about sandboxes, and this is also a conception in the AI Act, is that we have these exit reports. Dissemination activities, and involving a broad set of stakeholders in thinking about what we learned, matter because, as the OECD outlined as well, sandboxes can have a cost; to get value from the money being spent on them, one should not ignore the importance of dissemination. In many ways, sandboxes were created as a measure to try to ensure responsible innovation, and it is worth asking what irresponsible innovation, or the potential for it, looks like. In the finance sector, with the 2008 financial crisis and collateralized debt obligations, we saw what irresponsible innovations can do, and a sandbox is one way to explore this. So in a lot of ways, what we are coming to realize is that AI affects us all, across regions. In a sense, what can we do to really unite across
borders? This is also why these joint AI regulatory sandboxes were conceived of as policy mechanisms: to see whether there can be really extensive collaborations on transport or health or other aspects, and whether leading regulatory environments could come together to dive into that and figure it out. This is part of what we are going to explore, and it is written into the AI Act itself, and cross-border sandboxes are also mentioned in the Interoperable Europe Act. So I think we will see this new type of experimentation over the coming years. What I can say right now is that we are starting to facilitate it and will be working on the rollout. Any type of engagement with this will, I think, be welcome over the coming years.
Sophie Tomlinson: Thank you. Thank you, Alex. I can see we have one question from the floor, which is great. Before we go to you, I just want to get Jimson to share some reactions to what Alex has been saying and what we’ve discussed so far, especially about business incentives to participate in sandboxes. Jimson, knowing the members of AFICTA, what kind of questions would they have, and how could you incentivize businesses to participate in sandboxes? Would there need to be, as Natalie was saying, funding for some SMEs? Would there be an incentive if the sandbox was cross-border in nature, actually looking at interoperability between different African countries? Where do you see some of these questions that people have been asking?
Jimson Olufuye: Yes. Thank you very much, Sophie. The discussion has been very fluid, very useful and highly relevant. To really operationalize sandboxes requires a lot of stakeholders and a lot of interest. Importantly, for the SMEs it needs some coordination, and that’s why AFICTA is there. We are engaged in creating the necessary awareness, especially among members that want to create products with countrywide and region-wide benefits. In this regard, we know that we need to fast-track development, and that is why we need all the partners: in terms of funding, in terms of engagement, and in terms of an appropriate regulatory framework, like the AI Act that Alex mentioned and the process of bringing that together, which is now quite well established in the EU. We really want a similar thing happening across Africa with the AU and UNECA, in terms of their projects, like identity projects across Africa and data structuring, so that SMEs can be involved from the initial stage. Then, with meaningful participation, we can produce products that are highly relevant and useful for society. Thank you.
Sophie Tomlinson: Thank you, Jimson. Could we please go to the question in the room? Thank you.
Audience: Yes. Can you hear me?
Sophie Tomlinson: Yes.
Audience: Perfectly. Okay. Hi, my name is Giovanna. I’m at the IGF as part of the Brazil Youth Program; I’m one of the facilitators. It’s been an amazing discussion; thank you very much to Datasphere for putting it together. I have a question about the exit reports and about the documents that might need to be drafted during sandbox implementation, and I’d like to ask if you have some advice for governments or other public institutions that might be setting up a sandbox, because I believe drafting these reports will be a lot of work. I have some questions specifically about their authorship. Who will do it? What are the roles of the private companies, if any, in drafting them? And what are the goals in having them, not only to create a history and document the activity, but also to propose interpretations and paths forward? Thank you.
Sophie Tomlinson: Great question. Thank you so much. I’m going to take another comment from the floor, and then we’ll address those. Bertrand?
Audience: Thank you, Sophie. I’m Bertrand de la Chapelle, and this is less a comment from the floor because, for full disclosure, I’m with the Datasphere Initiative as its Chief Vision Officer. I just wanted to highlight and make an additional comment. There are key words that we don’t dare to use, but that are very important in this discussion. One is mistrust. We have to recognize that in the last 20 years a huge amount of mistrust has grown between public authorities, private actors, and civil society. Sandboxes are one of the tools that bring back the capacity for dialogue, particularly when discussions take place very early on. In the mapping the Datasphere has done, we see certain countries using sandboxes not only for compliance verification or for purely regulatory aspects, but also to help the different actors better understand the parameters of a particular sector. The second word is anxiety. There is a little bit of anxiety about this new tool: the methodology is not completely stabilized, there is a risk, and this is not the way operators are used to functioning. There are questions of who takes the lead in an organization and how responsibilities are distributed. The work that the European Commission, and the AI Office in particular, is doing to shape how those things will be handled helps in that regard, as does the work we are doing at the Datasphere through the Global Sandboxes Forum, which we launched as a space for exchanging practices around sandboxes. I also have here something I’d be happy to distribute about the observatory we have launched, which documents sandbox experiences around the world. Thank you.
Sophie Tomlinson: Thank you so much, Bertrand. Jai, I see you have your hand up. We have seven minutes left, so if you want to answer some of the questions Giovanna put forward, or build on anything, that would be great. Thanks.
Jai Ganesh Udayasankaran: Thanks, Sophie. I just wanted to quickly add to what was shared by the speaker from Datasphere. Most of the time we look at sandboxes as something the regulators own, with a kind of gatekeeping at entry. But why not look at sandboxes as creative, collaborative spaces where we actually help the entrepreneurs? Innovation is really required, and funding and resource constraints are universal, irrespective of jurisdiction. So why not use this space as an environment with a bit of hand-holding and support from the regulators, the governments, or academia, helping the innovations coming into the space to actually meet the requirements and expectations in terms of trust, rather than just gatekeeping. That’s my thought. AeHIN also uses an approach known as the convergence methodology, where we bring together the various stakeholders within the country as well as those beyond it.
Sophie Tomlinson: Thank you, Jai. Sorry, I just want to pause because we’ve got six minutes left, and I would love Thiago to come in to answer Giovanna’s point on the different types of exit reports. I want to make sure we answer that question, and I think Thiago will have some ideas on it. Thiago, do you want to share your perspective on that, as an ending comment from you as well? In one minute, if possible.
Moraes Thiago: Yes, of course. I know time is tight, so going straight to the point: I am actually finding it fascinating how differently regulators have been dealing with exit reports. In some cases, exit reports have been drafted by the companies, and then they often become internal knowledge for the regulator. In other contexts, the regulator has decided to take the lead. For example, in the Norwegian DPA’s experience, and several times at the ICO, the regulator has been the one drafting the main exit report; then, of course, they do an assessment with the participants to make sure nothing is shared that should not be disclosed. The public exit reports that have been published really cover the experience itself rather than sensitive, confidential issues, which is what the idea should be. We also see that in the AI Act proposal. So it really depends on how you deal with the exit report, but there is definitely room for flexibility here as well.
Sophie Tomlinson: Thank you, Thiago. And Natalie, I wanted to bring you in for a final wrap-up thought, since we’re quite short on time. Bertrand mentioned mistrust, and I think that’s something the IGF this year is focused on: trying to build trust and support international collaboration as much as we can. How do you think sandboxes can help build this trust at a cross-border level?
Natalie Cohen: Yeah, I think this issue of trust is key. One thing the OECD does is a drivers of trust in government survey, and the proportion of respondents who say they trust governments to appropriately regulate new technologies was only about 41%, so trust is definitely low. Regulatory experimentation builds the evidence base for regulatory reform in areas where the risks are not fully understood, and regulatory attempts on AI are still at an early stage. A lot has been mentioned about the risks of AI to society and the environment, as well as its obvious economic and innovation benefits. So I think it’s that collaboration element: creating a space where regulators, businesses, civil society and a range of stakeholders can dialogue and build the evidence base together, in a way that can then inform and influence a regulatory regime.
Sophie Tomlinson: Thank you. Thank you so much, Natalie. And thank you, everybody, for taking the time. I know a 9am session is sometimes not the easiest one to get to at the IGF, especially after an IGF music night, so thank you all so much for being here. Thank you as well to all the people who joined us online. The time, expertise and questions you shared were really valuable to us as we try to understand more about how people are thinking about regulatory experimentation, particularly sandboxes. Thank you for joining us, and I hope to see you all soon.
Mariana Rozo-Pan
Speech speed
150 words per minute
Speech length
727 words
Speech time
288 seconds
Sandboxes are collaborative safe spaces for experimentation where stakeholders test technologies against regulatory frameworks
Explanation
Sandboxes are collaborative spaces where different stakeholders come together to craft solutions and experiment with technologies. Regulatory sandboxes specifically allow public and private sectors, along with civil society, to test technologies against existing or developing regulatory frameworks.
Evidence
Interactive audience participation showing people associate sandboxes with collaboration and solutions; childhood Lego building analogy demonstrating flexible, experimental mindset
Major discussion point
What are AI sandboxes and their potential applications
Topics
Legal and regulatory
Sandboxes originated in fintech but now span multiple sectors including AI, health, and transportation across 150+ implementations globally
Explanation
While sandboxes were originally created within the finance sector to test financial technologies, they are now being implemented across various sectors. The DataSphere Initiative has mapped over 150 sandboxes globally focused on different topics, particularly AI innovation.
Evidence
DataSphere Initiative mapping identified over 66 sandboxes that grew to around 150; global distribution map showing sandboxes in developed and developing countries across Latin America, Asia, and Africa
Major discussion point
What are AI sandboxes and their potential applications
Topics
Legal and regulatory | Development
Sandboxes are promising tools for testing bold ideas in collaborative environments that create public value
Explanation
Sandboxes provide a methodology for testing innovative ideas in safe, collaborative spaces that go beyond benefiting specific startups or private companies. They are designed to create broader public value and translate into better technologies for society in general.
Evidence
DataSphere Initiative methodology focusing on responsible design, effective communication and engagement, and ensuring public value creation
Major discussion point
What are AI sandboxes and their potential applications
Topics
Legal and regulatory | Development
Meni Anastasiadou
Speech speed
139 words per minute
Speech length
676 words
Speech time
289 seconds
Regulatory sandboxes enable safe real-world testing of AI systems, particularly beneficial for SMEs
Explanation
Sandboxes provide a mechanism for safe, real-world testing of AI systems, which is especially valuable for small and medium enterprises. They allow businesses to test AI technologies with appropriate safeguards before full market deployment.
Evidence
Aerospace engineering analogy – engineers test airplanes on the ground before they fly to ensure safety; ICC’s four-pillar approach to AI governance framework
Major discussion point
What are AI sandboxes and their potential applications
Topics
Legal and regulatory | Economic
Agreed with
– Alex Moltzau
– Natalie Cohen
– Jimson Olufuye
Agreed on
SMEs need special support and consideration in sandbox participation
AI governance frameworks need to be harmonized, flexible, and supportive of innovation while reducing compliance complexities
Explanation
Effective AI governance requires frameworks that are aligned with existing global agreements to avoid creating a patchwork of regulations. These frameworks should be flexible enough not to hinder investment while creating favorable commercial conditions for entrepreneurship.
Evidence
ICC’s four-pillar narrative on artificial intelligence published in September; ICC represents more than 45 million businesses across 170 countries
Major discussion point
Why sandboxes are needed for AI governance
Topics
Legal and regulatory | Economic
Alex Moltzau
Speech speed
157 words per minute
Speech length
1214 words
Speech time
461 seconds
Responsible innovation requires evidence-based policy making, and sandboxes provide regulatory learning opportunities
Explanation
To achieve responsible innovation, policymakers need an evidence base to inform their decisions rather than relying on buzzwords or unfulfilled promises. Sandboxes serve as a mechanism for regulatory learning, helping regulators build competence on AI while ensuring products work as intended.
Evidence
Background as social data scientist with AI policy experience in Norway; involvement in Norwegian privacy sandbox with exclusively AI cases and published exit reports
Major discussion point
Why sandboxes are needed for AI governance
Topics
Legal and regulatory
Sandboxes help balance innovation with responsibility, ensuring products are both great and safe for citizens
Explanation
Sandboxes address the balancing act between promoting innovation and ensuring responsibility in AI development. As citizens want both great products and safe products/services, sandboxes provide a mechanism to achieve both objectives simultaneously.
Evidence
European Commission’s work on implementing act for AI regulatory sandboxes; AI regulatory sandbox subgroup under the AI board working with member states
Major discussion point
Why sandboxes are needed for AI governance
Topics
Legal and regulatory | Human rights
SME participation should be free according to EU AI Act provisions
Explanation
The EU AI Act explicitly states that participation in sandboxes for small and medium enterprises and startups should be free. This provision aims to remove financial barriers that might prevent smaller companies from participating in regulatory experimentation.
Evidence
EU AI Act explicit provisions regarding free participation for SMEs and startups
Major discussion point
Stakeholder engagement and participation
Topics
Legal and regulatory | Economic
Agreed with
– Meni Anastasiadou
– Natalie Cohen
– Jimson Olufuye
Agreed on
SMEs need special support and consideration in sandbox participation
Exit reports are crucial for dissemination and getting value from sandbox investments
Explanation
Given the costs associated with running sandboxes, exit reports and dissemination activities are essential for extracting value from the investment. These reports help involve broader stakeholders in understanding what was learned from the sandbox experience.
Evidence
EU AI Act conception of exit reports; reference to 2008 financial crisis and collateralized debt obligations as examples of irresponsible innovation
Major discussion point
Documentation and knowledge sharing
Topics
Legal and regulatory
Agreed with
– Natalie Cohen
– Moraes Thiago
– Participant 1
Agreed on
Sandboxes require significant resources and careful planning to be successful
Disagreed with
– Moraes Thiago
Disagreed on
Exit report authorship and responsibility
Cross-border sandboxes can facilitate extensive collaboration on transport, health, and other sectors between regulatory environments
Explanation
Joint AI regulatory sandboxes are conceived as policy mechanisms to enable collaboration across borders, particularly in sectors like transport and health. This approach allows leading regulatory environments to work together on common challenges.
Evidence
AI Act and Interoperable Europe Act mentions of cross-border sandboxes; European Commission facilitation of cross-border experimentation rollout
Major discussion point
Sector-specific applications and cross-border potential
Topics
Legal and regulatory | Infrastructure
Agreed with
– Jimson Olufuye
– Jai Ganesh Udayasankaran
– Sophie Tomlinson
Agreed on
Cross-border and regional cooperation enhances sandbox effectiveness
Jimson Olufuye
Speech speed
124 words per minute
Speech length
914 words
Speech time
440 seconds
AI regulation must be people-centered and inclusive, with sandboxes helping bridge the digital divide
Explanation
AI regulation should align with the vision of a people-centered, inclusive information society as envisioned by WSIS. Sandboxes can play a role in ensuring AI development serves to bridge the digital divide and achieve Sustainable Development Goals rather than create harmful products.
Evidence
AFICTA covers 43 countries in Africa; Nigeria’s AI strategy development and move toward AI law; Central Bank of Nigeria’s adoption of sandboxes in financial sector regulation
Major discussion point
Why sandboxes are needed for AI governance
Topics
Legal and regulatory | Development | Human rights
African countries are embracing experimentation for emerging technologies, with less than 10 having AI strategies currently
Explanation
While African countries are showing interest in regulatory experimentation for emerging technologies like AI, the continent is still in early stages with fewer than 10 countries having developed AI strategies. There’s a need to move from strategy development to actual AI regulation.
Evidence
Less than 10 African countries have AI strategy; Nigeria’s recent AI strategy development and progression toward AI law; AFICTA’s representation across 43 African countries
Major discussion point
Sector-specific applications and cross-border potential
Topics
Legal and regulatory | Development
Private sector coordination through organizations like AFICTA is essential for meaningful SME participation
Explanation
Organizations like AFICTA play a crucial role in coordinating private sector engagement and creating awareness among members who want to develop products with countrywide and regional benefits. This coordination is essential for operationalizing sandboxes effectively.
Evidence
AFICTA founded in 2012 with six countries, now covering 43 countries; members include ICT associations, companies, and individual professionals
Major discussion point
Stakeholder engagement and participation
Topics
Economic | Development
Agreed with
– Meni Anastasiadou
– Alex Moltzau
– Natalie Cohen
Agreed on
SMEs need special support and consideration in sandbox participation
Regional cooperation is essential for products with countrywide and regional benefits
Explanation
To fast-track development and create products that have broader impact, regional cooperation is necessary. This includes coordination between organizations, appropriate funding, engagement, and regulatory frameworks similar to what exists in the EU.
Evidence
Reference to AU, UNECA projects like identity projects across Africa; need for data structuring to enable SME involvement from initial stages
Major discussion point
Sector-specific applications and cross-border potential
Topics
Development | Legal and regulatory
Agreed with
– Alex Moltzau
– Jai Ganesh Udayasankaran
– Sophie Tomlinson
Agreed on
Cross-border and regional cooperation enhances sandbox effectiveness
Natalie Cohen
Speech speed
150 words per minute
Speech length
994 words
Speech time
397 seconds
Sandboxes require significant governance resources, clear eligibility criteria, testing frameworks, and exit strategies
Explanation
Successful sandboxes are resource-intensive and require careful planning including transparent eligibility criteria, clear testing frameworks, proper evaluation processes, and defined exit strategies. Without these elements, sandboxes can fail to achieve their objectives.
Evidence
OECD 2021 recommendation on agile regulatory governance; OECD experience providing technical assistance to fix broken sandboxes; upcoming OECD toolkit on sandbox development
Major discussion point
Implementation challenges and resource considerations
Topics
Legal and regulatory
Agreed with
– Moraes Thiago
– Participant 1
– Alex Moltzau
Agreed on
Sandboxes require significant resources and careful planning to be successful
Governments need to consider funding support for SMEs and avoid creating market distortions while providing participation incentives
Explanation
Successful sandboxes require balancing act between providing incentives for business participation without creating unfair market advantages. SMEs often need additional support including funding, data access, and legal/compliance resources to participate effectively.
Evidence
OECD observation that some successful sandboxes primarily benefited larger corporates rather than SMEs; need for diverse and sustainable approach to sandboxes
Major discussion point
Implementation challenges and resource considerations
Topics
Legal and regulatory | Economic
Agreed with
– Meni Anastasiadou
– Alex Moltzau
– Jimson Olufuye
Agreed on
SMEs need special support and consideration in sandbox participation
Disagreed with
– Jai Ganesh Udayasankaran
Disagreed on
Primary purpose and framing of sandboxes
Only about 41% of survey respondents trust governments to appropriately regulate new technologies, showing need for evidence-based collaboration
Explanation
OECD research shows low levels of public trust in government’s ability to regulate new technologies appropriately. This trust deficit highlights the importance of collaborative approaches like sandboxes that bring together multiple stakeholders to build evidence-based regulatory approaches.
Evidence
OECD drivers of trust in government survey showing only about 41% trust in appropriate regulation of new technologies
Major discussion point
Trust building and collaboration
Topics
Legal and regulatory | Human rights
Regulatory experimentation builds evidence base for reform in areas where risks are not fully understood
Explanation
In emerging technology areas like AI where risks are not fully understood and regulatory attempts are still early stage, regulatory experimentation provides a collaborative space for building the evidence base needed to inform regulatory reform. This addresses both the potential risks and obvious benefits of technologies like AI.
Evidence
OECD focus on regulatory experimentation as part of agile regulatory governance; recognition that AI regulatory attempts are still early stage
Major discussion point
Trust building and collaboration
Topics
Legal and regulatory
Moraes Thiago
Speech speed
137 words per minute
Speech length
983 words
Speech time
428 seconds
Civil society and individuals affected by AI solutions need meaningful roles before, during, and after sandbox implementation
Explanation
When sandboxing AI solutions, it’s important to consider that individuals will be affected regardless of whether their personal data is processed. Civil society and affected individuals should have meaningful participation throughout the entire sandbox process, not just as an afterthought.
Evidence
Current PhD research on the role of civil society in sandboxes; experience as practitioner at Brazilian Data Protection Authority working on pilot sandbox launch
Major discussion point
Stakeholder engagement and participation
Topics
Legal and regulatory | Human rights
Resource limitations can be addressed through pilot approaches, partnerships, and international cooperation
Explanation
When regulators face resource constraints, they can start by ‘sandboxing the sandbox’ through pilot programs. Partnerships with international institutions, development banks, and other organizations can provide funding and capacity building support to overcome resource limitations.
Evidence
Examples of institutions like CAF, CEPAL supporting sandbox initiatives; Brazilian Data Protection Authority’s pilot sandbox development; common practice of creating pilots before full sandboxes
Major discussion point
Implementation challenges and resource considerations
Topics
Legal and regulatory | Development
Agreed with
– Natalie Cohen
– Participant 1
– Alex Moltzau
Agreed on
Sandboxes require significant resources and careful planning to be successful
Exit report authorship varies between companies and regulators, with flexibility in approach depending on context
Explanation
Different regulators handle exit reports differently – some have companies draft them for internal use, while others like the Norwegian DPA and the ICO take the lead in drafting public reports. The approach depends on the regulator’s strategy and the intended use of the reports.
Evidence
Examples from the Norwegian DPA and the ICO where regulators drafted main exit reports; variation in practices across different jurisdictions; EU AI Act proposal allowing flexibility
Major discussion point
Documentation and knowledge sharing
Topics
Legal and regulatory
Disagreed with
– Alex Moltzau
Disagreed on
Exit report authorship and responsibility
Public exit reports focus on experience sharing rather than sensitive confidential information
Explanation
When exit reports are made public, they typically focus on sharing the sandbox experience and lessons learned rather than disclosing sensitive or confidential business information. This approach allows for knowledge sharing while protecting participant interests.
Evidence
Analysis of published exit reports showing focus on experience rather than sensitive information; regulatory assessment processes to ensure appropriate disclosure levels
Major discussion point
Documentation and knowledge sharing
Topics
Legal and regulatory
Participant 1
Speech speed
155 words per minute
Speech length
1038 words
Speech time
399 seconds
Funding challenges exist, with potential solutions including cost-sharing models between affected sectors
Explanation
African regulators are grappling with how to fund sandboxes, as core operational funding for experimentation is often not available. One solution being explored is cost-sharing models where multiple regulators from different sectors that would benefit from a sandbox contribute to its funding.
Evidence
25 national sandboxes identified in Africa with 24 in finance sector; co-creation activities in Africa exploring cost-sharing between regulators from different affected sectors
Major discussion point
Implementation challenges and resource considerations
Topics
Legal and regulatory | Development
Agreed with
– Natalie Cohen
– Moraes Thiago
– Alex Moltzau
Agreed on
Sandboxes require significant resources and careful planning to be successful
Legal backing for sandboxing authority is often unclear and needs to be established
Explanation
Many regulators want to establish sandboxes but are uncertain whether they have the legal authority to do so. This creates a challenge where regulators need to find legal backing for experimentation or work to establish such authority if it doesn’t exist.
Evidence
Feedback from co-creation labs showing regulators questioning their legal authority to sandbox; common challenge identified across multiple jurisdictions in Africa
Major discussion point
Implementation challenges and resource considerations
Topics
Legal and regulatory
Jai Ganesh Udayasankaran
Speech speed
151 words per minute
Speech length
906 words
Speech time
358 seconds
Health sector sandboxes address universal health coverage, interoperability standards, and cross-border data sharing needs
Explanation
In the health sector, sandboxes are being used to help private sector applications integrate into national mainstream systems, ensure interoperability with existing standards, and facilitate responsible cross-border health data sharing for medical tourism and treatment abroad.
Evidence
Asia eHealth Information Network representation from 15 countries with 2,600+ members across 84 countries; examples of universal health coverage programs using sandboxes; medical tourism data sharing needs
Major discussion point
Sector-specific applications and cross-border potential
Topics
Legal and regulatory | Development
Agreed with
– Alex Moltzau
– Jimson Olufuye
– Sophie Tomlinson
Agreed on
Cross-border and regional cooperation enhances sandbox effectiveness
Sandboxes should function as collaborative spaces with hand-holding support rather than just gatekeeping
Explanation
Rather than being viewed primarily as regulatory gatekeeping mechanisms, sandboxes should be seen as creative collaborative spaces that provide hand-holding support to entrepreneurs. This approach helps innovations meet regulatory requirements and expectations while fostering trust.
Evidence
Asia eHealth Information Network’s convergence methodology bringing various stakeholders together; universal resource constraints across jurisdictions
Major discussion point
Stakeholder engagement and participation
Topics
Legal and regulatory | Development
Agreed with
– Mariana Rozo-Pan
– Meni Anastasiadou
– Alex Moltzau
– Natalie Cohen
Agreed on
Sandboxes are collaborative spaces that bring together multiple stakeholders for experimentation
Disagreed with
– Natalie Cohen
Disagreed on
Primary purpose and framing of sandboxes
Sophie Tomlinson
Speech speed
156 words per minute
Speech length
2417 words
Speech time
926 seconds
Sandboxes are being explored globally as tools for digital policy challenges and new technologies
Explanation
Sandboxes for digital policy challenges and new technologies aren’t limited to Europe but are being explored worldwide. This includes implementations in Asia (notably Singapore), Latin America (Brazil, Chile), and Africa, demonstrating the global nature of this regulatory experimentation approach.
Evidence
DataSphere Initiative’s global mapping work; Africa Sandboxes Forum; co-creation lab in Abuja, Nigeria as part of African Data Protection Conference; Nigeria developing sandbox for data protection law compliance
Major discussion point
What are AI sandboxes and their potential applications
Topics
Legal and regulatory | Development
Agreed with
– Alex Moltzau
– Jimson Olufuye
– Jai Ganesh Udayasankaran
Agreed on
Cross-border and regional cooperation enhances sandbox effectiveness
The DataSphere Initiative provides comprehensive support for sandbox development including training and capacity building
Explanation
The DataSphere Initiative offers various forms of support for organizations wanting to develop sandboxes, including co-creation activities, one-on-one coaching journeys, and master classes. This support addresses the recognized need for sandboxes while helping navigate implementation challenges around resources and technical requirements.
Evidence
DataSphere Initiative’s work as think-do-tank on data governance and sandboxes; workshop series at IGF; QR code for interactive participation; diverse panel of speakers from multiple sectors and regions
Major discussion point
Implementation challenges and resource considerations
Topics
Legal and regulatory | Development
Audience
Speech speed
144 words per minute
Speech length
474 words
Speech time
196 seconds
Exit reports require careful consideration of authorship, roles, and documentation goals
Explanation
There are important questions about who should draft exit reports from sandboxes, what roles private companies should play in their creation, and how to balance documentation with proposing interpretations and paths forward. The concern is that drafting comprehensive reports will require significant work and clear authorization processes.
Evidence
Question from Brazil Youth Program facilitator about exit report documentation and authorization processes
Major discussion point
Documentation and knowledge sharing
Topics
Legal and regulatory
Countries need guidance on policy packages and integration approaches for AI and data sandboxes
Explanation
There are important design questions about whether AI and data sandboxes should be structured separately or integrated, and what legislative or regulatory features make participation more accessible to businesses, especially SMEs. This reflects the need for clearer frameworks on sandbox architecture and accessibility.
Evidence
Question from Institute for Policy Studies and Media Development in Vietnam about policy package design and SME accessibility
Major discussion point
Implementation challenges and resource considerations
Topics
Legal and regulatory | Economic
Agreements
Agreement points
Sandboxes are collaborative spaces that bring together multiple stakeholders for experimentation
Speakers
– Mariana Rozo-Pan
– Meni Anastasiadou
– Alex Moltzau
– Natalie Cohen
– Jai Ganesh Udayasankaran
Arguments
Sandboxes are collaborative spaces, safe spaces for collaboration in which, by nature, different stakeholders come together to craft solutions and experiment with technologies
Regulatory sandboxes are a great tool that responds to this framework of governance, and they can enable the safe, real-world testing of AI systems
Sandboxes provide a collaborative space for building the evidence base needed to inform regulatory reform
Regulatory experimentation builds the evidence base for making regulatory reform in a way that can then inform and influence a regulatory regime
Sandboxes should function as collaborative spaces with hand-holding support rather than just gatekeeping
Summary
All speakers agree that sandboxes fundamentally serve as collaborative platforms where diverse stakeholders (public sector, private sector, civil society) come together to experiment with technologies and build evidence for regulatory decision-making
Topics
Legal and regulatory | Development
SMEs need special support and consideration in sandbox participation
Speakers
– Meni Anastasiadou
– Alex Moltzau
– Natalie Cohen
– Jimson Olufuye
Arguments
Regulatory sandboxes enable safe real-world testing of AI systems, particularly beneficial for SMEs
SME participation should be free according to EU AI Act provisions
Governments need to consider funding support for SMEs and avoid creating market distortions while providing participation incentives
Private sector coordination through organizations like AFICTA is essential for meaningful SME participation
Summary
There is strong consensus that small and medium enterprises face unique challenges in participating in sandboxes and require targeted support including free participation, funding assistance, and coordinated engagement through representative organizations
Topics
Legal and regulatory | Economic
Sandboxes require significant resources and careful planning to be successful
Speakers
– Natalie Cohen
– Moraes Thiago
– Participant 1
– Alex Moltzau
Arguments
Sandboxes require significant governance resources, clear eligibility criteria, testing frameworks, and exit strategies
Resource limitations can be addressed through pilot approaches, partnerships, and international cooperation
Funding challenges exist, with potential solutions including cost-sharing models between affected sectors
Exit reports are crucial for dissemination and getting value from sandbox investments
Summary
All speakers acknowledge that sandboxes are resource-intensive endeavors requiring careful planning, adequate funding, clear frameworks, and proper documentation to achieve their objectives
Topics
Legal and regulatory | Development
Cross-border and regional cooperation enhances sandbox effectiveness
Speakers
– Alex Moltzau
– Jimson Olufuye
– Jai Ganesh Udayasankaran
– Sophie Tomlinson
Arguments
Cross-border sandboxes can facilitate extensive collaboration on transport, health, and other sectors between regulatory environments
Regional cooperation is essential for products with countrywide and regional benefits
Health sector sandboxes address universal health coverage, interoperability standards, and cross-border data sharing needs
Sandboxes are being explored globally as tools for digital policy challenges and new technologies
Summary
Speakers agree that sandboxes become more effective when they operate across borders and regions, enabling broader collaboration and addressing shared challenges in sectors like health and transport
Topics
Legal and regulatory | Infrastructure | Development
Similar viewpoints
Both speakers emphasize the critical need for evidence-based approaches to AI regulation and the role of sandboxes in building this evidence base, particularly given low public trust in government regulation of new technologies
Speakers
– Alex Moltzau
– Natalie Cohen
Arguments
Responsible innovation requires evidence-based policy making, and sandboxes provide regulatory learning opportunities
Only 41% of people trust governments to appropriately regulate new technologies, showing the need for evidence-based collaboration
Topics
Legal and regulatory | Human rights
Both speakers advocate for more inclusive and supportive approaches to sandboxes that go beyond regulatory gatekeeping to provide meaningful participation opportunities for affected communities and entrepreneurs
Speakers
– Moraes Thiago
– Jai Ganesh Udayasankaran
Arguments
Civil society and individuals affected by AI solutions need meaningful roles before, during, and after sandbox implementation
Sandboxes should function as collaborative spaces with hand-holding support rather than just gatekeeping
Topics
Legal and regulatory | Human rights | Development
Both speakers from African contexts highlight the need for inclusive approaches to AI regulation and the practical challenges of establishing regulatory frameworks in developing economies
Speakers
– Jimson Olufuye
– Participant 1
Arguments
AI regulation must be people-centered and inclusive, with sandboxes helping bridge the digital divide
Legal backing for sandboxing authority is often unclear and needs to be established
Topics
Legal and regulatory | Development | Human rights
Unexpected consensus
Civil society participation in sandboxes
Speakers
– Moraes Thiago
– Alex Moltzau
– Jai Ganesh Udayasankaran
Arguments
Civil society and individuals affected by AI solutions need meaningful roles before, during, and after sandbox implementation
Exit reports are crucial for dissemination and getting value from sandbox investments
Sandboxes should function as collaborative spaces with hand-holding support rather than just gatekeeping
Explanation
Despite representing different regions and institutional perspectives (academic researcher, EU policy maker, Asian health network), there was unexpected alignment on the need for meaningful civil society engagement throughout the sandbox process, not just as beneficiaries but as active participants
Topics
Legal and regulatory | Human rights
Trust-building as a core function of sandboxes
Speakers
– Natalie Cohen
– Bertrand de la Chapelle
– Alex Moltzau
Arguments
Only 41% of people trust governments to appropriately regulate new technologies, showing the need for evidence-based collaboration
Sandboxes are one of the tools that brings the capacity of dialogue, particularly when the discussions are taking place very early on
Sandboxes help balance innovation with responsibility, ensuring products are both great and safe for citizens
Explanation
There was unexpected consensus across different institutional perspectives that sandboxes serve a crucial trust-building function between stakeholders, addressing broader societal mistrust in technology governance beyond just regulatory compliance
Topics
Legal and regulatory | Human rights
Overall assessment
Summary
The discussion revealed remarkably high consensus among speakers from diverse geographical and institutional backgrounds on fundamental aspects of AI sandboxes. Key areas of agreement included the collaborative nature of sandboxes, the need for special SME support, resource requirements, and the value of cross-border cooperation. There was also unexpected alignment on the importance of civil society participation and trust-building functions.
Consensus level
High consensus with strong implications for global sandbox development. The alignment suggests that despite different regulatory contexts, there are universal principles and challenges in sandbox implementation. This consensus provides a solid foundation for international cooperation and knowledge sharing in AI governance, while highlighting the need for coordinated approaches to address common challenges like resource constraints and stakeholder engagement.
Differences
Different viewpoints
Exit report authorship and responsibility
Speakers
– Moraes Thiago
– Alex Moltzau
Arguments
Exit report authorship varies between companies and regulators, with flexibility in approach depending on context
Exit reports are crucial for dissemination and getting value from sandbox investments
Summary
Thiago emphasizes flexibility in who drafts exit reports (companies or regulators), citing examples of different approaches, while Alex stresses the importance of exit reports for dissemination and stakeholder involvement, suggesting a more structured approach to maximize investment value
Topics
Legal and regulatory
Primary purpose and framing of sandboxes
Speakers
– Jai Ganesh Udayasankaran
– Natalie Cohen
Arguments
Sandboxes should function as collaborative spaces with hand-holding support rather than just gatekeeping
Governments need to consider funding support for SMEs and avoid creating market distortions while providing participation incentives
Summary
Jai advocates for sandboxes as supportive, collaborative environments that help entrepreneurs meet requirements, while Natalie emphasizes the need for careful balance to avoid market distortions and the resource-intensive nature of proper sandbox governance
Topics
Legal and regulatory | Development
Unexpected differences
Resource allocation and funding responsibility
Speakers
– Participant 1
– Natalie Cohen
– Moraes Thiago
Arguments
Funding challenges exist, with potential solutions including cost-sharing models between affected sectors
Governments need to consider funding support for SMEs and avoid creating market distortions while providing participation incentives
Resource limitations can be addressed through pilot approaches, partnerships, and international cooperation
Explanation
While all speakers acknowledge resource constraints, they propose different solutions that could potentially conflict. The African perspective suggests cost-sharing between regulators, OECD emphasizes government funding responsibility, and the Brazilian perspective focuses on international partnerships. This disagreement is unexpected because it reveals different regional approaches to the same fundamental challenge
Topics
Legal and regulatory | Development
Overall assessment
Summary
The discussion shows remarkable consensus on the value and potential of AI sandboxes, with disagreements primarily focused on implementation details rather than fundamental concepts. Key areas of disagreement include exit report management, the balance between support and market neutrality, and funding mechanisms.
Disagreement level
Low to moderate disagreement level. The speakers largely agree on goals but differ on methods and emphasis. This suggests a maturing field where practitioners are working through operational details rather than debating fundamental principles. The implications are positive – there’s broad consensus on the value of sandboxes, but more work is needed on standardizing best practices and addressing regional variations in implementation approaches.
Partial agreements
Takeaways
Key takeaways
AI sandboxes are collaborative safe spaces that enable stakeholders to test technologies against regulatory frameworks, with over 150 implementations globally spanning multiple sectors beyond their fintech origins
Sandboxes serve as crucial tools for balancing innovation with responsibility, providing evidence-based regulatory learning while ensuring AI products are both innovative and safe for citizens
Successful sandbox implementation requires significant resources, clear governance structures, transparent eligibility criteria, and well-defined exit strategies
SME participation is critical and should be supported through free participation models and funding assistance to avoid market distortions while ensuring inclusive innovation
Civil society engagement is essential throughout the sandbox lifecycle, as individuals affected by AI solutions need meaningful representation before, during, and after implementation
Cross-border collaboration through sandboxes can address global AI governance challenges, particularly in sectors like health, transport, and data sharing
Trust-building between public authorities, private sector, and civil society is a fundamental benefit of sandboxes, addressing widespread mistrust in technology governance
Exit reports and knowledge dissemination are crucial for maximizing value from sandbox investments and informing broader regulatory frameworks
Resolutions and action items
EU AI Office to release draft Implementing Act for AI regulatory sandboxes for public comment in autumn
OECD to publish toolkit on sandbox development and design in coming weeks
Brazilian Data Protection Authority to launch pilot sandbox with news expected soon
DataSphere Initiative to continue offering one-on-one coaching, master classes, and co-creation activities for sandbox development
Participants encouraged to engage with EU’s public consultation process when the draft Implementing Act is released
Continued collaboration between DataSphere Initiative and Asia eHealth Information Network on health sector sandboxes
Unresolved issues
Whether AI and data sandboxes should be structured separately or integrated remains unclear
Legal backing for sandboxing authority is often unclear and needs to be established in many jurisdictions
Funding models and resource allocation strategies for sandboxes, particularly in developing countries, require further development
The specific role and meaningful participation mechanisms for civil society throughout the sandbox process need better definition
Standardization of exit report formats and authorship responsibilities across different regulatory contexts
How to effectively measure and evaluate the real-world impact and value creation of AI sandboxes
Balancing transparency requirements with protection of commercially sensitive information in sandbox operations
Suggested compromises
Cost-sharing models between different regulatory sectors that benefit from sandbox outcomes to address funding constraints
Pilot or ‘sandbox the sandbox’ approaches to test waters with limited resources before full implementation
Flexible duration models ranging from 3 months to 5 years depending on testing objectives and available resources
Hybrid sandbox models combining regulatory and operational elements to maximize utility
Free participation for SMEs while larger corporates contribute to funding sustainability
Collaborative exit report development between regulators and participants with appropriate confidentiality protections
Regional cooperation frameworks to share costs and benefits of cross-border sandbox initiatives
Thought provoking comments
We often forget how we used to play when we were kids. And as we were children growing up, we were actually quite excited about experimenting and about thinking about building things, building them, and then kind of destroying them and building something new again. And that flexible, agile mindset that maybe we had when we were children is what we’re often lacking when it comes to building agile regulations and shaping how we’re governing technologies.
Speaker
Mariana Rozo-Pan
Reason
This comment reframes regulatory innovation through a powerful metaphor that makes the abstract concept of sandboxes tangible and relatable. It challenges the traditional rigid approach to regulation by connecting it to universal human experience of creative play and experimentation.
Impact
This opening metaphor set the collaborative and experimental tone for the entire discussion. It influenced how other speakers framed their contributions, with many referring back to concepts of experimentation, collaboration, and safe spaces throughout the session.
Sometimes what SMEs need support with is part of accessing, could be accessing data, it could be legal and compliance resources as well… Some successful sandboxes have been successful in terms of testing products and services and bringing them to market, but they’ve been primarily successful with larger corporates.
Speaker
Natalie Cohen
Reason
This comment introduces a critical equity dimension to sandbox design, highlighting how these supposedly democratizing tools might actually reinforce existing power imbalances between large corporations and SMEs.
Impact
This observation shifted the discussion from purely technical implementation questions to broader questions of accessibility and fairness. It prompted subsequent speakers like Alex Moltzau to address how the EU AI Act specifically mandates free participation for SMEs, and influenced Jimson’s comments about the need for coordination and support.
Civil society and individuals, they might have an important role before, during and after the sandbox is done… We’re talking about individuals that are having their personal data processed or that will be affected by these AI solutions, regardless if the personal data has been processed or not.
Speaker
Moraes Thiago
Reason
This comment challenges the typical stakeholder model of sandboxes by highlighting a significant gap – the meaningful inclusion of those most affected by AI systems. It raises fundamental questions about democratic participation in technology governance.
Impact
This provocation introduced a new dimension to the conversation that hadn’t been adequately addressed. It influenced later discussions about trust-building and prompted Bertrand’s comment about mistrust between stakeholders, while also connecting to broader IGF themes about inclusive governance.
There are key words that we don’t dare to use, but that are very important in this discussion. One is mistrust… And we have to recognize that in the last 20 years, a huge amount of mistrust has grown between public authorities, private actors, and civil society. Sandboxes are one of the tools that brings the capacity of dialogue.
Speaker
Bertrand de la Chapelle
Reason
This comment directly addresses the elephant in the room – the underlying trust deficit that makes regulatory innovation necessary. By naming ‘mistrust’ and ‘anxiety’ as key but unspoken factors, it reframes sandboxes not just as technical tools but as trust-building mechanisms.
Impact
This intervention fundamentally shifted the conversation’s framing from technical implementation to the deeper social and political context. It prompted Natalie’s closing comment about the OECD trust survey showing only 41% of people trust governments to appropriately regulate new technologies, providing empirical support for Bertrand’s observation.
Why not look at the sandboxes in terms of being a creative or collaborative space where we actually help the entrepreneurs… rather than just being gatekeeping… why not we use this space as an environment where there is a bit of a hand-holding and support that comes from the regulators or the governments or the academy.
Speaker
Jai Ganesh Udayasankaran
Reason
This comment challenges the traditional regulatory paradigm by proposing a shift from gatekeeping to nurturing. It reframes the regulator-industry relationship from adversarial to collaborative, suggesting sandboxes as spaces for capacity building rather than just compliance testing.
Impact
This perspective added a constructive dimension to the discussion about regulatory approaches, moving beyond the compliance-focused view to consider how sandboxes could actively support innovation while ensuring responsible development.
We learned that it’s actually one thing that regulators grapple with. Sometimes it’s not clear that they actually are authorities allowed to sandbox in any way. So, how do they look to find that legal backing to carry out such an experimentation?
Speaker
Participant 1 (Maureen)
Reason
This comment reveals a fundamental practical barrier that challenges assumptions about regulatory authority and flexibility. It highlights how existing legal frameworks may not accommodate experimental approaches, creating a chicken-and-egg problem for regulatory innovation.
Impact
This observation grounded the discussion in practical realities facing regulators, particularly in developing countries. It influenced the conversation about resource constraints and the need for legal framework adaptation to support experimental governance approaches.
Overall assessment
These key comments collectively transformed what could have been a technical discussion about sandbox implementation into a nuanced exploration of the social, political, and structural challenges of regulatory innovation. The progression from Mariana’s playful metaphor through increasingly complex considerations of equity, inclusion, trust, and legal authority created a comprehensive framework for understanding sandboxes not just as tools, but as mechanisms for reimagining the relationship between innovation and governance. The comments built upon each other to reveal sandboxes as both promising solutions and reflections of deeper systemic challenges in technology governance, ultimately framing them as trust-building exercises in an era of widespread institutional skepticism.
Follow-up questions
What will be the role of civil society throughout all the sandbox experience – before, during and after the sandbox is done?
Speaker
Moraes Thiago
Explanation
This is identified as an important gap in current sandbox frameworks, especially when dealing with AI solutions that affect individuals and their personal data, requiring better inclusion of civil society voices
How do countries design policy packages and govern sandboxes? Should AI and data sandboxes be structured separately or integrated?
Speaker
Representative from Institute for Policy Studies and Media Development in Vietnam
Explanation
This addresses fundamental design questions about sandbox architecture and whether different technology domains should be handled together or separately
What types of legislative or regulatory features have proved the most effective in making participation more accessible to businesses and especially SMEs?
Speaker
Representative from Institute for Policy Studies and Media Development in Vietnam
Explanation
This focuses on practical implementation challenges and ensuring inclusive participation across different business sizes
How can cost-sharing models work between different regulators or sectors when setting up sandboxes?
Speaker
Maureen (Africa Sandboxes Forum Lead)
Explanation
This addresses resource constraints by exploring collaborative funding approaches when sandboxes affect multiple sectors or regulatory domains
What legal models can authorities use to establish sandboxes when they lack clear legal backing?
Speaker
Maureen (Africa Sandboxes Forum Lead)
Explanation
Many regulators want to establish sandboxes but are uncertain about their legal authority to do so, requiring clarification of legal frameworks
How can cross-border AI regulatory sandboxes be effectively implemented and what collaboration mechanisms are needed?
Speaker
Alex Moltzau
Explanation
This explores the potential for international cooperation through joint sandboxes, particularly relevant for AI systems that operate across borders
What are the specific roles and responsibilities of private companies versus public institutions in drafting exit reports and documentation?
Speaker
Giovanna (Brazil Youth Program)
Explanation
This addresses practical implementation questions about documentation responsibilities and ensuring proper knowledge transfer from sandbox experiences
How can sandboxes be used as collaborative spaces for hand-holding and support rather than just gatekeeping?
Speaker
Jai Ganesh Udayasankaran
Explanation
This suggests a shift in sandbox philosophy from regulatory compliance checking to more supportive innovation facilitation
How can sandboxes help build trust at a cross-border level between different stakeholders?
Speaker
Sophie Tomlinson
Explanation
Given low trust levels in government regulation of new technologies (only 41% according to OECD), understanding how sandboxes can build international trust is crucial
Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.