Smart Regulation: Rightsizing Governance for the AI Revolution

20 Feb 2026 17:00h - 18:00h


Session at a glance

Summary

This discussion focused on designing governance frameworks for an AI-driven world, with particular emphasis on ensuring equitable access to AI resources for smaller and developing nations. The panel, moderated by Sabina Chofu from TechUK, brought together experts from Chatham House, Mozilla, NASCOM, and Cohere to explore international cooperation challenges and opportunities.


Bella Wilkinson from Chatham House provided a realistic assessment of the current geopolitical landscape, arguing that global consensus on AI governance is unlikely given US-China tensions and weakened multilateral institutions. However, she emphasized that partial alignment on specific issues through coalition-building remains possible and pragmatic. The discussion highlighted significant barriers facing emerging economies, including limited access to compute resources, data silos, infrastructure gaps in power and connectivity, and skills shortages.


Rafik Rikorian from Mozilla advocated for open-source solutions as a path forward, drawing parallels to the Linux model where countries could contribute to shared infrastructure while maintaining sovereignty through local fine-tuning. He proposed alternative architectures like federated learning and data trusts that would enable international collaboration without requiring countries to surrender their data.


The panelists identified several promising areas for cooperation, including technical standards through frameworks like NIST and ISO, shared risk mitigation practices, and interoperability of resources. Examples discussed included Southeast Asian multilingual models, regional compute consortiums, and public-private data sharing initiatives. Halak Shirastava emphasized the importance of capacity building through shared evidence and procurement policies that open markets to global players.


The conversation concluded with optimism about increasing participation and convergence in AI governance standards over the next 12 months, despite acknowledging the significant challenges ahead.


Key points

Major Discussion Points:

Global AI Governance Challenges: The panel discussed the realistic limitations of achieving global consensus on AI governance in the current geopolitical environment, with Bella Wilkinson emphasizing that while complete alignment is unlikely, partial alignment on priority issues through coalition-building is possible and more pragmatic than traditional multilateral approaches.


AI Divide and Access Barriers: Rajesh Nambia highlighted the emerging “AI divide” that will be significantly larger than the previous digital divide, focusing on critical barriers for developing nations including limited access to compute resources, data quality and organization issues, infrastructure gaps (power, connectivity), and skills shortages.


Open Source as a Solution Framework: Rafik Rikorian advocated for open source models as a key mechanism for international cooperation, drawing parallels to Linux’s success and proposing that shared infrastructure with local fine-tuning could provide digital sovereignty while enabling global collaboration.


Standards and Technical Cooperation: The discussion emphasized the importance of technical standards (NIST, ISO frameworks), shared risk mitigation practices, and interoperability of resources as practical areas where international alignment is achievable, particularly benefiting smaller companies and emerging economies.


Capacity Building and Implementation: The panel addressed translating international cooperation into actual capabilities for emerging economies, emphasizing the need for shared evidence, procurement policies, sectoral governance approaches, and talent development in both technical and regulatory domains.


Overall Purpose:

The discussion aimed to explore practical approaches to AI governance in a multipolar world, focusing on how to create equitable access to AI resources and capabilities for smaller and developing nations through international cooperation, shared infrastructure, and coalition-building rather than traditional multilateral frameworks.


Overall Tone:

The discussion began with a notably realistic and somewhat pessimistic assessment of global cooperation challenges, but progressively became more optimistic and solution-oriented. The moderator explicitly noted this shift, with panelists building on each other’s ideas to present concrete examples of successful cooperation models, open source solutions, and practical implementation strategies. The tone evolved from acknowledging significant barriers to emphasizing actionable opportunities and expressing genuine excitement about progress in the coming months.


Speakers

Speakers from the provided list:


Sabina Chofu – International Policy and Strategy Lead at TechUK (sister association of NASCOM in the UK)


Bella Wilkinson – Research Fellow on the Digital Society Program at Chatham House


Rajesh Nambia – President of NASCOM (National Association of Software and Service Companies in India)


Rafik Rikorian – Chief Technology Officer for Mozilla


Halak Shirastava – Global AI and Public Policy and Regulatory Affairs at Cohere (Canadian AI developer)


Audience – Unidentified audience member who asked a question during the discussion


Additional speakers:


Navreena Singh – From Credo AI (mentioned as unable to attend due to a meeting with the president)


Full session report

This panel discussion at an international AI summit examined practical approaches to AI governance and international cooperation, with particular focus on addressing barriers facing developing nations in AI adoption. The conversation, moderated by Sabina Chofu from TechUK, brought together experts from policy research, technology development, industry associations, and private sector AI companies. The discussion took place on the final day of the summit, with speakers maintaining a notably optimistic and solution-focused tone despite acknowledging significant challenges.


Reframing Global AI Governance: From Multilateral Idealism to Coalition Building

Bella Wilkinson from Chatham House opened the discussion by challenging conventional approaches to global AI governance. She argued that comprehensive multilateral cooperation on AI is fundamentally unrealistic given current geopolitical realities, including the accelerating US-China AI race and what she described as the unprecedented degradation of international institutions since World War II. The intense uncertainty surrounding frontier AI capabilities further complicates traditional diplomatic approaches.


However, rather than adopting a pessimistic stance, Wilkinson proposed a pragmatic alternative: coalition building around specific priority issues that could later be scaled through multilateral formats. This approach would focus on sovereignty and strategic autonomy messaging, allowing resource-constrained countries to adopt common approaches to data governance and pool resources while maintaining alignment on specific issues. The key insight was that collective benefits must massively outweigh what countries could achieve individually to make such coalitions viable.


This reframing proved influential throughout the discussion, with other speakers building on the coalition-building framework and focusing on practical cooperation mechanisms rather than idealistic global agreements.


Understanding the AI Divide: Beyond Digital Access

Rajesh Nambia from NASCOM provided a comprehensive analysis of what he termed the “AI divide” – a gap that he argued would be significantly larger than the previous digital divide. His analysis distinguished between mere access to technology and genuine agency in shaping it, emphasizing that the AI divide fundamentally concerns countries’ ability to maintain sovereignty and self-determination in an AI-driven world.


Nambia outlined multiple interconnected barriers facing developing nations. Compute access remains severely limited and expensive, with meaningful AI development requiring GPU clusters that are cost-prohibitive even when adjusted for purchasing power parity. Data organization presents another critical challenge, with developing countries often having siloed data across government departments, leading to poor representation in AI training datasets.


Infrastructure gaps in power and connectivity create additional burdens, while skills shortages exist not just in AI development but crucially in AI governance itself. Nambia emphasized the need for talent development in both technical capabilities and regulatory understanding, particularly for government officials who must oversee AI systems without necessarily understanding their potential applications and harms.


Importantly, Nambia advocated for an innovation-first approach to governance, arguing that while regulation is necessary, countries seeking meaningful participation in the AI ecosystem must prioritize developing capabilities over implementing restrictive regulations that could stifle the very innovation they need to avoid being left behind.


Open Source Models and Collaborative Infrastructure

Rafik Rikorian from Mozilla provided concrete examples of how international cooperation could work through open source approaches. Drawing parallels to Linux, where virtually every computer globally runs on shared code while allowing diverse implementations, Rikorian proposed that AI could follow similar collaborative models. This would enable countries to contribute to common infrastructure while maintaining sovereignty through local adaptation and fine-tuning.


Rikorian discussed federated learning as an example, referencing Google’s handwriting recognition training across Android devices, where training occurs locally while only model weights are shared centrally. This approach could enable international collaboration on healthcare or climate research without requiring countries to surrender sensitive data across borders.


He also described data trust models, citing examples of Hawaiian communities creating data collectives for genomic information used in pharmaceutical research, and radio stations in the Pacific creating similar collaborative structures. Mozilla’s Data Collaborative represents an attempt to create more ethical approaches to data sourcing that ensure attribution and compensation for data providers.


Rikorian mentioned research suggesting significant potential savings from switching to open source models, though he acknowledged the economist’s name was unclear from his notes, indicating this was preliminary information rather than definitive analysis.


Industry Perspectives on Standards and Interoperability

Halak Shirastava from Cohere brought a private sector perspective emphasizing the practical importance of technical standards and interoperability. She argued that frameworks like NIST and ISO are particularly valuable for startups and smaller companies because of their flexibility and evolutionary nature, contrasting favorably with rigid country-specific regulations that could exclude smaller players from the market.


Shirastava identified three key areas for international alignment: technical standards providing flexible compliance frameworks, shared practices around risk mitigation enabling companies to learn from each other’s experiences, and interoperability of shared resources supporting the entire AI ecosystem from large technology companies to emerging startups.


Her approach to capacity building went beyond traditional training programs to emphasize shared evidence, performance benchmarks, and cross-border procurement policy networks. She argued that emerging economies need substantive support rather than superficial engagement to participate meaningfully in AI development, including access to real performance data and implementation experiences rather than just workshops and presentations.


Practical Cooperation Models and Examples

The discussion identified several promising examples of international cooperation that could serve as templates for broader collaboration. Regional compute consortiums, such as India’s AI mission cluster shared between government, academia, and industry, demonstrate how countries can pool resources while maintaining local control. Cloud credit programs negotiated with hyperscalers provide emerging economies with access to necessary computational resources.


Bella Wilkinson mentioned language preservation and development initiatives in Southeast Asia as examples of how countries can collaborate on shared challenges while maintaining cultural sovereignty. These projects demonstrate how multilingual AI development can balance global collaboration with local adaptation needs.


Data sharing initiatives between government, academia, and industry within countries provide models for broader international cooperation, showing how different sectors can collaborate on common datasets and infrastructure while maintaining appropriate governance and oversight.


Sectoral Approaches and Implementation Challenges

The discussion revealed consensus around the importance of sector-specific governance approaches. Nambia emphasized that meaningful AI governance must recognize that potential harms in healthcare differ fundamentally from those in financial services or education. This sectoral approach requires deep understanding of specific applications and use cases rather than broad horizontal regulations that may not address real-world implementation challenges.


The conversation also highlighted the dual challenge of developing both technical AI capabilities and governance expertise simultaneously. Countries need regulatory talent that understands both the capabilities and limitations of AI systems, presenting particular challenges for nations with limited existing AI expertise.


Audience Engagement and Transparency Questions

During the question period, an audience member raised concerns about transparency and accountability in AI systems, referencing recent discussions about document releases and public access to information. The panel acknowledged these concerns while noting that transparency requirements must be balanced with practical implementation needs and competitive considerations.


Moderator Sabina Chofu noted throughout the discussion that speakers were maintaining a notably positive and solution-focused tone, which she attributed to the collaborative spirit of the summit’s final day and the practical focus on actionable cooperation mechanisms rather than abstract policy debates.


Future Directions and Optimistic Outlook

Despite acknowledging significant challenges, the discussion concluded on an optimistic note about increasing participation in AI development and governance. Shirastava noted growing excitement about AI development among both companies and countries, suggesting that community participation will continue expanding rather than contracting.


The economic arguments for collaborative approaches, including potential cost savings from open source adoption and shared infrastructure, suggest that practical considerations may drive adoption of more cooperative models. As the economics of AI development become clearer, countries may find that collaborative approaches offer better value than attempting to develop capabilities independently.


The panel successfully reframed AI governance from idealistic multilateral aspirations toward pragmatic coalition-building focused on specific technical and resource-sharing mechanisms. The convergence of speakers from different sectors around similar solutions suggests these approaches have broad stakeholder support and could represent viable paths forward for more inclusive AI governance.


However, significant implementation challenges remain, particularly around scaling coalition-building approaches beyond major economic powers, developing sufficient governance talent in countries with limited technical expertise, and balancing sovereignty concerns with the need for international cooperation. The discussion provided a foundation for addressing these challenges through practical, incremental cooperation rather than comprehensive global agreements.


Session transcript

Sabina Chofu

about this morning is right-sizing governance for an AI-driven world. So what we’ll try to do with a pretty excellent panel, as I’m sure you’ll agree, is talk a bit about shared compute and data initiatives that hopefully give all nations access to AI resources. We’ll look a bit at how to up-level the playing field for smaller and developing nations. And we’ll talk about collaboration in key sectors like healthcare and education and climate resilience. I’ve got a perfect panel to do that with. I’m going to introduce them all first, and then we’ll dive straight into the conversation. So unfortunately, Navreena Singh from Credo AI couldn’t be with us this morning. She’s got a meeting with the president, so she’s excused.

But we do have… Let me start, just next to me here, with Bella Wilkinson, who’s a research fellow on the Digital Society Program at Chatham House. Next to her is Rafik Rikorian, I hope I’ve pronounced that vaguely okay, who is the Chief Technology Officer for Mozilla. Next to him, we’ve got Rajesh Nambia, who is the President of NASCOM, our sister association here in India. And last but not least, we’ve got Halak Shirastava, who’s Global AI and Public Policy and Regulatory Affairs at Cohere. And for those of you who don’t know me, I’m Sabina Chofu, I’m International Policy and Strategy Lead at TechUK. So we are the sister association of NASCOM back in the UK.

So without further ado, we will start with setting a bit of a global context, and who better to do that than Isabella. So from a kind of geopolitical perspective, how realistic, I guess, is alignment on AI governance across countries with… fair to say very different strategic interests right now. And where do you see maybe multilateral institutions? I know multilateralism is not a very popular theme these days, but where do you see multilateral institutions or maybe other international players playing a role in this space? So over to you.

Bella Wilkinson

Thank you, Sabina. Thanks to my fellow speakers. It’s great to be here today, really keeping the energy up on the final day of the summit. We can all do it. Let me answer your question directly and then perhaps elaborate a little bit more in detail. Global consensus on how to govern AI is a no-go. It is not going to happen in this geopolitical environment. However, partial alignment on priority issue areas is possible, and it’s pragmatic to throw our weight behind these smaller gatherings that we can then scale using the multilateral format. Now, let’s take a second. Let’s take a second to sketch out the state of play. We have some great experts in the room, on the panel, so I won’t spend too long doing this.

We have been absolutely covered in really optimistic summit rhetoric, walking into Bharat Mandapam, going to side events over the course of this week. But despite the optimism, outside of these walls, in the background, the US-China AI race continues to accelerate to the umpteenth degree. The capabilities of advanced and the most frontier AI systems and models, the little we know about their capabilities, mind, with huge gaps in transparency, continue to advance. And global scientists only recently have issued warnings about the state of the science and the intense uncertainty surrounding these capabilities and the impact they might be having on our communities and societies. Well, it’s a good thing we have strong international institutions and shared values. We don’t. You know, it’s a really difficult time for global cooperation outside of AI. We’re seeing, I would argue, an unprecedented degradation since the Second World War of the international organizations, the shared values, the rule of law that we have all held so dearly. So suffice to say, it’s a difficult time for global governance; it’s a difficult time for the global governance of AI. Now, institutions in the past have very much been brokers, mediators and scalers of consensus on tricky governance issues, and some of the governance problems we’re facing today are pretty old, right? I mean, I’ve encountered them in previous roles at Chatham House and other areas of tech; I’m sure the experts on our panel have come across them. And the core governance puzzle that we need to figure out is this: taking into account the state of geopolitics, the uncertainty around the state of the science, the market dynamics mediated by these leading labs, and the intensely, intensely competitive US and Chinese AI race dynamics, how on earth do we bring rivals and competitors around the same table?

How do we bring states with a nominal or a minimal alignment of interests and incentives into the same room? Now, you started by asking me about multilateralism and institutions, but maybe let’s reframe this and talk about coalitions. In other areas of governance, what we’ve seen is intense coalition building in crisis or unstable settings around a trusted mechanism, a trusted approach, perhaps in the absence of shared values and principles. And what I’m really interested in, in the context of AI, is where coalition building can develop trust around a credible governance approach, adopt a state champion, get support from associations, from builders, from leading labs themselves, and then scale it using the multilateral format. And over the past few days, I’ve been really excited by some of this splintering-to-scale dynamic that I’ve seen, maybe in conversations on verification, on-chip hardware, risk mitigation strategies, even anonymized collection of usage data, which came out of the commitments yesterday.

Now, what’s the messaging that can drive this coalition building in the absence of trusted institutions, in the absence of shared values? I’ll get into this later in my remarks, but I think it has to be sovereignty and strategic autonomy. Resource-constrained countries who might decide to adopt a common data governance approach, who might decide to pool resources like compute, have to also consider a degree of governance alignment, again, at this low-hanging fruit, in order to not only withstand the dynamics of the AI race, but to ensure that the collective benefits of cooperation and governance alignment massively outweigh anything they could do individually. So I think I’ll leave it there. Slightly pessimistic take. Let’s see if there’s some more optimism on the

Sabina Chofu

Thank you so much, Bella. I don’t think it was that pessimistic. You did kind of, I think you made it sound very pragmatic in terms of, look, the world is not what we want it to be, and there isn’t the level of multilateral cooperation that we maybe used to have. But you have talked about coalition building, and it’s probably the best we can hope for in the world as it is, as opposed to the world as we’d like it right now. And Rajesh, can I turn to you next? For emerging economies, obviously access to compute, data and infrastructure is critical, but what do you see as some of the most pressing barriers, but also maybe opportunities, for AI adoption in India and beyond?

Over to you.

Rajesh Nambia

First of all, thank you for having me on the panel. Pleased to be with all of you, and quite a few of you showed up here as well, so thank you for coming up. We wish this was the Modi inauguration last evening, which had a little bit more than this crowd, but nevertheless, we’ll do with this. But you know, I believe we used to talk about the digital divide for a long period of time, and while that had its own puts and takes when you compare a smaller economy and smaller country with a larger one and so on, I think the AI divide is going to be much, much bigger than the digital divide which we saw. The biggest difference is that the digital divide was at least about, you know, access and so on, whereas this is all about agency, and it can completely put you on a different back foot. So it is such an important topic to talk about when you talk about the broader, you know, haves and have-nots and what really goes on with the larger and smaller economies and so on. And I truly believe that accessibility, when you look at the broader scale, will come across multiple things, starting with compute, one of the largest, you know, pieces of what we are talking about here, right? I think, as you mentioned, in terms of the race between the US and China and so on and so forth, but if you leave those two countries, then of course we have a big drop in terms of where the real access is going to be. And I believe totally that, you know, the continued limited access to the broader compute facility is going to be unduly putting some of these smaller countries, especially the developing ones, into a little bit of a disadvantage.

So, I think there’s a lot that can be done around it in terms of saying, you know, what is that, you know, countries can potentially do in terms of pooling and so on. But I think there is certainly an issue when it comes to compute. And, you know, not just in terms of accessibility, but also in terms of expense and so on, because at the end of the day, all of these are, even if you use the purchasing power parity, and then sort of look at what it costs for people to sort of get into the kind of level of GPUs, potentially, or GPU clusters one has to produce to even have a meaningful language model and so on.

I think that’s going to be a very different ballgame. And the second element of this whole broader issue that we’re talking about is also the data: the organization of data, availability of data, quality of data, and so on. I think the more you get into the developing world, you will find that the data itself is very siloed in many ways. There are, you know, different state silos, different department silos, and so on, and it gets to a point where the data, which is such an important and integral part of everything to do with AI, the data which gets fed into the broader models and eventually the AI systems, will necessarily not have the right representation of that population, which is a huge concern. Of course, India is slightly luckier in many ways in terms of us, you know, playing that game a little bit, punching a little bit above our weight in some sense. But when you go down the list of countries which do not have access to all of these, I think you’re going to find it even harder in terms of solving the data issue, and the data availability, data quality, all of that becomes a bigger issue.

And when we talked about the infrastructure gap, the compute gap, it’s a little bit more than just the pure compute itself, GPUs and so on; it’s also about connectivity and power. These are issues which, you know, we somehow take for granted in other segments, but I think you will find that power is going to be a huge foundation for all of that. As you know, there are multiple layers in building any of the AI systems, and one of the bottom-most layers is going to be power, and then, you know, what really happens to the power? And if it has to be clean power, does it put an additional tax on the developing world for making sure that that power comes out clean? Connectivity is a huge issue. Even though it’s kind of broadly solved in some sense with all the satellite options and so on, the kind of connectivity you need to run a truly inclusive AI system is going to be very different from what, you know, people have thought otherwise. And then of course we can go on and on in terms of the other layers: the availability of skills, and ensuring that you have the right skills not just to leverage AI but also to build AI. I mean, there are two different types of capabilities that you need to produce in any country. So these are the issues.

And the opportunity itself would be to look at this and say: are there other ways of collaborating, other ways of partnering, and so on? Because, especially when you go down the list of countries, we have close to 200 countries or so in the world, and when you leave the top 5 or 10 and then you go below and keep going down the list, it becomes harder. I mean, I don’t think that everybody is going to be producing a full-blown large language model and the things that they need to do it for themselves. At that point in time, the question will be: can you really partner, can you really leverage some of the common systems that can be done across these countries, and so on.

Sabina Chofu

Thank you. I mean, you’ve done a brilliant job of putting all the three problems we’ve got and then saying you’ve got a long list afterwards in terms of cooperation. But I love the touch of optimism there at the end. It’s like, you know, if you lift a country out of the room, you still have a hundred and whatever, 185, that need to figure it out. So I liked a lot of that framing. And thanks for touching on

Rafik Rikorian

I mean, unsurprisingly, being someone from Mozilla, I’ll probably go with the open source angle as one of the opportunities to actually align the talent, align the capabilities, and actually do shared infrastructure. I mean, maybe I’ll draw two analogies to think about, and then we can go more deep into those as it applies to AI. But for all practical purposes, every computer on the planet runs Linux. There are a few iPhones here and there on top of it. But the Linux model, I think, is a good one for all of us to think about, that every computer… Every country, every nation in the world, almost every company in the world, contributes to the single code base which has been deployed across these billions of computing devices across the planet.

And there is lots of derivative work that happens from it. So a company like Google can then take that and make it into Android. A vending machine company can deploy Linux onto a Raspberry Pi and run it inside their vending machine. So I think there’s an analogy here of being able to use shared infrastructure, shared software infrastructure, as a collaboration mechanism where we can all pool resources together but still have sovereignty on top of it. So we can still all be contributing to this common core but then fine-tune our way to our own particular implementations. And I think that if we take that and then marry it with a web analogy, of in the early 90s of the original web, you needed to ask for permission in order to deploy a website.

And by permission I mean effectively you had to go buy yourself a Solaris box, or you had to buy yourself a Windows NT server and try to configure an ActiveX scenario. And the beauty of what Mozilla and Firefox did, we’re not the only ones who did it, but the beauty of what they did there is a forced openness throughout the stack that enabled anyone without permission to build whatever they wanted. And I think we need to find a similar moment. So in that world, we went from the Windows NT stack and all of IIS to the LAMP stack. And the LAMP stack has these gorgeous analogies of just like anyone can build on Linux.

When Facebook needed PHP to move faster, they did massive improvements on PHP, which then trickled down to all of us. So people can contribute in different ways across it. That’s not the world we’re currently living in with AI. We’re living in this world where there are a few frontier model companies that are effectively doing governance for all of us in some way, shape, or form. And I agree with my colleague that that’s an untenable situation. I do live in San Francisco, but you don’t want four people in San Francisco making governance decisions for the entire world; that doesn’t make a lot of sense. So I do think if we can find the LAMP stack equivalent model for AI, and this is actually what I’ve turned all of Mozilla towards, of just like how do we define open standards, how do we define open interfaces, so that the vibrancy of the open source community can come together and actually build solutions that work for every single person, every single community, every single government on the planet.

You can contribute to the common base, but then build upon it and take it in a direction more aligned with your country's values, or your company's values, or your individual values, and fine-tune your solution out of that. So I think there is an analogy here around how open source could actually provide digital sovereignty across all the different levels: give us agency as people, give opportunities for flexibility at the corporation level, and give countries the ability to own their version of the stack. That could actually be quite beautiful if we can figure out how to do it in an appropriate way.

Sabina Chofu

I tried to give you a dose of optimism; you have given me a dose of optimism. And I'm absolutely shocked you talked about open source. Thanks so much, Rafi. And I did appreciate that you brought up standards, because I'm going to turn to Halak now and we're going to go a bit into collaboration and standards. Obviously, with the myriad of AI governance frameworks, where do you see potential for alignment on standards, maybe some interoperability, maybe some risk management frameworks? Keep us on the hopeful path, please.

Halak Shirastava

I am here to provide the hopeful perspective. Let me start out by saying that I lead global public policy at Cohere. Cohere is a Canadian AI developer; we build models, and we have an agentic AI solution called North. In my role, I look across the global regulatory landscape. That means if our startup wants to do business in a certain country, I try to understand the regulatory landscape of that country, and then I advise our company on whether it's favorable or not. When we're talking about governance and the frameworks that exist today, my perspective is that it's not there yet, but I have a more promising view of it. I think that on certain principles we are converging toward where we need to go, and there are strong opportunities.

Technical standards are one of them. There are frameworks like the NIST and ISO frameworks. For startups, these are key, and the reason they're key is that they're flexible and evolving. If we just go country by country, that's going to price out smaller companies. But an international framework that is evolving and flexible, including industry coalitions, which a lot of the model developers are part of, but which other stakeholders can join as well, really helps. The second thing I would say is shared practices around risk mitigation. I think there's strong opportunity there as we come together and share documents or evaluations around misuse, model capabilities, or the impact of models.

Like I said, we have a way to go, but we are moving closer to that. And the third thing I would say is interoperability of shared resources. This is key, key, key. We have a big ecosystem. Yes, there is big tech involved, but there are smaller players, and every single day there are new startups wanting to emerge and build a go-to-market strategy. The only way this is possible is if all of industry, big and small, the whole ecosystem, starts sharing documentation around red teaming, evals, multilingual benchmarks, and things like that, to come to some sort of consensus.

Sabina Chofu

Thanks so much. I'm really enjoying this positive vibe we're going with. And that combination, I think, links really nicely back to what Bella was saying around coalitions: build on themes, right? Where do we think we have common ground, and what do we think we can build on? So I really enjoyed that contribution. Rajesh, can I turn to you next? Because I did wonder what all of this means for smaller and developing economies. And maybe if you have any examples of shared standards, pooled resources, any of the things Halak was talking about, public-private models, or anything you've seen that looks promising, that looks like it could deliver.

Thank you.

Rajesh Nambia

You know, as we said, the moment you look at shared models, there are multiple reasons why we want to do this. One, of course, as we've talked about, is the cost involved. That itself is becoming cost prohibitive, and hence many countries may not even have an option but to adopt this shared model. We also see this in the regional compute consortiums that folks can potentially create, and you often see examples where a standard data set is shared, not just within a country. It could be government, academia, and industry sharing the same data sets, making sure that they're able to leverage them in some sense.

Compute clearly continues to be a shared resource. Even in India, for example, our own AI Mission has created a cluster that can be broadly leveraged by industry, academia, and government, ensuring that they're able to get access to the right GPU farms and take things forward. So: public-private sharing of data, certainly the compute consortiums, and then cloud credits. Cloud credits are something sovereigns have been able to work with the hyperscalers on, especially for GPUs. Because even if it's not about building a frontier model, you still need compute to leverage a frontier model, build some reasoning models on top of it, and build an application that is meaningful. Not that you need a powerful GPU every time, but there are occasions where you definitely would, and hence using some of those cloud credits becomes a big need. And then, when you switch to regulations: how do you make sure that even having a policy is something which is shared? You don't want to reinvent the wheel every single time. Do you have a method by which you could look at what is already out there in the world and try to reuse it? Because what you don't want is a hundred versions of the same thing with a few nuances here and there. So that's something I think countries will try to create a model for as well.

Sabina Chofu

Thank you so much. And I'm going to turn over to the audience.

Audience

Yes. Looking forward to a truth, transparency, and accountability-driven world. It takes 30 years for the Epstein files to come out in a place like America, the developed world. Is that the speed of the system, until it collapses and we start a new world? Are we resigned to that fate?

Sabina Chofu

Yeah, so I can't really see the link between the Epstein files and AI governance, but thank you. So, just to build on what Rajesh was saying there on capability, maybe we can move into a bit of cross-border cooperation. Bella, if I can turn to you to build on those points. Because obviously, what we are seeing across the developing world in particular is that it's often the institutional capacity that's a bit of an issue; even with all the engagement and all the investments, you still run into limits.

And I saw you were taking notes furiously, so I'm sure you have reflections on what has been said so far. But also, what are some of the resources?

Bella Wilkinson

Countries need to figure out what they want to invest in and what dependencies they're willing to accept. Countries wanting to build strong institutions that can mainline AI directly into public service delivery and, as you said, enable cross-border cooperation might take a step back and figure out which foreign capabilities or foreign services they're willing to accept at some levels of the stack, and where they'd like to invest in indigenous solutions. And I mentioned open source earlier because this has come up time and time again, and I'm sure it will be absolutely no surprise to our audience here today. An example which has really stuck with me, and Rafi, I'd be really interested in your thoughts on this, is the Southeast Asian Languages in One Network model, the multilingual SEA-LION LLM.

And this is something we've explored in a really interesting collaboration with AI Safety Asia: open models with local adaptation, really balancing inputs from open source models, potentially provided by foreign providers, with adaptation to a local context. And so, leaving the summit, what I'm really going to be interested in is this connection between drawing on inputs from the open source community, fine-tuning and locally adapting their contributions, and then perhaps doing so not only in the service of strong, robust institutions at the national level that are AI-ready, but also at this kind of collective, cross-border level. I hope that makes sense.

Sabina Chofu

It does, and I'm going to let Rafi feed into that as well, because you've segued really nicely into his part. Feel free to react to what Bella has said, but if you can also touch upon what you've seen as best practice in international and cross-border collaboration, maybe in healthcare, climate resilience, education, anywhere you've seen good stories to tell, please do share.

Rafik Rikorian

I mean, I do think a lot about local fine-tuning, and I think that's actually a really powerful concept: we can all contribute to a core and then locally fine-tune for our values and our needs. This has shown up in a bunch of different ways, and I'm interested personally in all these alternative, I don't even want to call them alternative, but other architectures that make this possible. Because in some ways we're being fed a regime that says it's not possible, but architecturally it actually is, in a bunch of different ways. So I love the indigenous data model, looking at what different indigenous peoples have done around data collectives for their local areas. There's a group of people, for example, in Hawaii that is doing this for their genomic data, because genomic data is really useful for pharmaceutical models. They've been looking for ways to both monetize their data and track its provenance as it goes through these pharmaceutical models.

So there are some professors out of UCSD starting to build what these data trusts could actually look like for Hawaiian people, and I think that model could be replicated in lots of different parts of the world. Mozilla is actually attempting to do a bunch of this. We're creating something that we call the Mozilla Data Collaborative, or Collective, sorry. And what the Collective is meant to be is a marketplace of ethically sourced but provenance-traced data sets, so that you can bring your data. It will actually help you scrub it, clean it, et cetera, and make sure you have the appropriate licenses on it, so that people can come find the data sets they want to train their models on, but with attribution given, compensation given, et cetera.

So we're literally in conversations with almost every radio station on the planet to try to get their recordings and their transcripts onto the marketplace, not for Mozilla to make money. In fact, we actually want the radio stations to have a monetization path for all the data they're sitting on, rather than simply having it scraped by big model providers trying to soak it into their systems. Instead, require that it be licensed, require that compensation be given. So I think there are models there. And on the computational side, I think there are also a lot of interesting things showing up around federated learning. For those of you who don't know what federated learning is, think of how Google did this very famously when they trained their handwriting model across everyone's Android phones.

Your handwriting is very personal and private, and it stays on your device. Yet Google was able to train a handwriting recognition model without getting access to your data, because part of the training happened on your phone, and then only the model weights were shipped back up for centralized training. And I think something like that could actually be an interesting model for international collaboration: I can bring my data to the game, my healthcare data, my values data, my language data, but not have to release it to a different company, or sorry, a different country. Instead, do it in a different way.

I can do part of the training on my compute, on my infrastructure, and only ship model weights back up, and then actually create bigger models across borders and across geographies that take into account different healthcare scenarios, different value systems, et cetera. So I think there are these interesting alternative architectures we can start leaning into, these data trust models, these federated learning models, that could be massive enablers for cooperation and allow us to build foundational things that we can then fine-tune and bring to our local context.
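The train-locally, average-centrally flow described above can be sketched in a few lines. This is a minimal illustration of the federated averaging idea, not any production system: the linear model, the synthetic client data, and names like `federated_average` are assumptions made for the example.

```python
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One client's training pass: plain gradient descent for linear
    regression on data that never leaves the client."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)  # gradient of mean squared error
        w -= lr * grad
    return w

def federated_average(weights, clients, rounds=20):
    """FedAvg sketch: each round, every client trains locally and only
    the updated weights (never the raw data) are averaged centrally."""
    for _ in range(rounds):
        updates = [local_update(weights, X, y) for X, y in clients]
        weights = np.mean(updates, axis=0)  # equal client weighting for simplicity
    return weights

# Two "countries" each hold private data drawn from the same true model y = 2x.
rng = np.random.default_rng(0)
clients = []
for _ in range(2):
    X = rng.normal(size=(50, 1))
    clients.append((X, 2.0 * X[:, 0]))

w = federated_average(np.zeros(1), clients)
```

The central coordinator only ever sees weight vectors, so each participant keeps full custody of its raw data, which is the property that makes this architecture attractive for cross-border collaboration.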

Sabina Chofu

Thanks so much, Rafi. Fine-tuning seems to be definitely a theme in this conversation: how you build for different cultures and countries. And Halak, maybe I can come to you next, because we keep talking about international cooperation and coordination. But I'm wondering, how do you translate that, you know, chit-chat into actual skills, capacity, and capability for emerging economies? And, I mean, we are at a very international AI Impact Summit. So how do we go from talking about governance to all this international policy actually delivering for emerging economies?

Halak Shirastava

It's a good question. Let me start out by saying capacity building isn't just running workshops or telling regulators what should be done. Capacity building for emerging economies especially is critical because emerging economies have unequal access to data, information, and technology. So what are we trying to solve for here? The first thing I would say is shared evidence. We need players to feed into this capacity building system with documents, results, and performance benchmarks, to lift up other players. That, I think, would be number one.

The second thing I think is key, and sometimes overlooked, is the value of procurement policies. And I agree with Bella: what if we had an industry coalition, a cross-border network, solving for procurement policies or procurement rules? What this does is bring in global players, so you're opening up your country to different markets. The next thing I would say, let me put it this way: there are developers, who develop the technology, and then there are deployers, who buy the technology and use it, for example a public sector agency.

Economist Frank Nagle has a recent report estimating that approximately 24 billion U.S. dollars are being wasted by not switching to open source models right now. So the economics are starting to make a lot of sense. Once all these stars align, it becomes almost obvious what an answer could look like for local governments around open source AI models, et cetera. So I'm really excited about that over the next 12 to 18 months.

Sabina Chofu

Thank you. Rajesh?

Rajesh Nambia

No, I agree with what's been said so far, but I also want to give a sense of this: when you look at AI governance, people tend to lead with regulation first. I believe that countries, and especially the countries we talked about from an inclusion point of view, have got to lead with an innovation-first mindset. Regulation is required and certainly needed, but innovation is probably needed more, in some sense. Also, while there could be horizontal governance that applies to every AI system, I think the more meaningful governance is sectoral governance: when you look at AI systems for healthcare, the understanding of harm in the healthcare segment is very different from financial services, and so on. So if you get into those sectoral areas, you can have a meaningful governance structure. And last but not least, you need the right talent: people in the public sector, the people who are supposedly governing all of this, who actually understand it. It's not talent for building AI models and systems, but talent in the governance space. If the people in governments who are actually regulating this don't understand the real harm, it's going to be a bigger issue. And especially for the list of countries we talked about, as you go deeper down the list, you will find that talent, in terms of understanding, is a real issue.

Sabina Chofu

Thank you. And, you know, as someone who lives in Brussels, I’ll make sure to take that message back. Halak.

Halak Shirastava

Okay, so what am I most excited about in the next 12 months? In the last few days, you've seen companies really, really excited about AI, but you've also seen countries very excited about AI. So what does this mean for governance? It means that the community and the participation are only going to increase; I don't see it going backwards. As the technology evolves, more players are going to have a voice in the system and in the standards bodies, the ITU bodies or the ISO bodies. And I think because of this convergence, we as a society are going to increase our literacy, not only in AI but in technology generally, and bring it into whatever we're in, whether the private sector or the public sector.

And because of that, I think a lot of progress will be made in the next 12 months, and you'll see it as it converges.

Sabina Chofu

Thank you so much. Thanks to all the panel. Thanks for being here, and enjoy the rest of your day. Thank you.

Bella Wilkinson

Speech speed

155 words per minute

Speech length

979 words

Speech time

377 seconds

Partial Alignment via Coalitions

Explanation

Bella argues that achieving full global consensus on AI governance is unrealistic, but forming issue‑specific coalitions can still produce meaningful alignment and shared benefits.


Evidence

“In other areas of governance, what we’ve seen is intense coalition building in crisis or unstable settings around a trusted mechanism, a trusted approach, perhaps in the absence of shared values and principles.” [43]. “who might decide to adopt a common data governance approach, who might decide to pool resources like compute, have to also consider a degree of governance alignment, again, at this low‑hanging fruit, in order to not only withstand the dynamics of the AI race, but to ensure that the collective benefits of cooperation and governance alignment massively outweigh anything they could do individually.” [41].


Major discussion point

Global AI Governance Feasibility & Coalition Building


Topics

Artificial intelligence | The enabling environment for digital development


Sovereignty & Strategic Autonomy as Drivers

Explanation

She notes that resource‑constrained countries can be motivated to join coalitions when the benefits are framed in terms of preserving sovereignty and strategic autonomy over AI capabilities.


Evidence

“dependencies, figure out what they want to invest in and what dependencies they’re willing to accept, wanting to build strong institutions, again, that can mainline AI directly into public service delivery, and as you said, enable cross‑border cooperation, might take a step back and figure out which foreign capabilities or foreign services they’re willing to accept at some levels of the stack and where they’d like to invest in indigenous solutions.” [14].


Major discussion point

Global AI Governance Feasibility & Coalition Building


Topics

Artificial intelligence | The enabling environment for digital development


Institutional Strengthening and Indigenous vs. Foreign Solutions

Explanation

Bella stresses that strong national institutions must decide which parts of the AI stack to source externally and which to develop indigenously, balancing openness with local control.


Evidence

“dependencies, figure out what they want to invest in and what dependencies they’re willing to accept, wanting to build strong institutions, again, that can mainline AI directly into public service delivery, and as you said, enable cross‑border cooperation, might take a step back and figure out which foreign capabilities or foreign services they’re willing to accept at some levels of the stack and where they’d like to invest in indigenous solutions.” [14].


Major discussion point

Capacity Building and Policy Implementation for Emerging Economies


Topics

Artificial intelligence | The enabling environment for digital development


Sabina Chofu

Speech speed

142 words per minute

Speech length

1209 words

Speech time

508 seconds

Coalition Building as Pragmatic Path

Explanation

Sabina points out that, given the erosion of multilateral cooperation, building focused coalitions is the most viable way to translate AI governance ideas into concrete outcomes for emerging economies.


Evidence

“So, you know, kind of how do we bring that from we talk about governance to all this international policy actually delivering for emerging economies?” [2].


Major discussion point

Global AI Governance Feasibility & Coalition Building


Topics

Artificial intelligence | The enabling environment for digital development


Rajesh Nambia

Speech speed

195 words per minute

Speech length

1953 words

Speech time

598 seconds

Compute, Data, Infrastructure, and Skills Gaps

Explanation

Rajesh highlights the multifaceted AI divide in developing nations, emphasizing shortages in high‑performance compute, fragmented low‑quality data, unreliable power/connectivity, and scarce skilled talent.


Evidence

“And the second element of this whole broader issue that we’re talking about is also the data and then the organization of data, availability of data, quality of data, and so on.” [36].


Major discussion point

Barriers and Opportunities for AI Adoption in Developing Economies


Topics

Artificial intelligence | Closing all digital divides


Regional Consortia, Cloud Credits, and Public‑Private Data Sharing

Explanation

He proposes practical remedies such as regional compute consortia, cloud‑credit programmes, and collaborative data sharing between government, academia, and industry to bridge the AI gap.


Evidence

“We also find that in the regional compute consortiums that, you know, folks can potentially create and you often see examples of where, for example, a standard data set and stuff like that being shared by, not just by, you know, even within a country.” [34]. “It could be between government, academia, and then industry sort of sharing the same sort of data sets, making sure that they’re able to leverage that in some sense.” [28].


Major discussion point

Barriers and Opportunities for AI Adoption in Developing Economies


Topics

Artificial intelligence | The enabling environment for digital development


Innovation‑First, Sector‑Specific Governance

Explanation

Rajesh argues that emerging economies should prioritize innovation and adopt sector‑specific regulatory frameworks (e.g., health, finance) before imposing broad, one‑size‑fits‑all AI regulations.


Evidence

“I believe that countries and especially the countries which we talked about in terms of more from an inclusion point of view, you’ve got to lead with innovation first mindset because I think regulation is required and certainly needed, but I think innovation is probably needed more in some sense.” [1]. “we do while there could be horizontal governance which will apply to every AI systems I think the more meaningful governance that you’re going to find when you get into sectoral governance meaning when you look at the AI systems for health care and you’ll find there are the understanding of a harm in the health care segment is very different from financial services and so on so how do you get into those sectoral areas then you can have meaningful governance structure and last but not the least you need to have the right talent and people who can actually who understand all of this in both in public sector and people who are supposedly governing all of this that is something which is it’s not the talent in terms of broader AI model building and people who are building AI systems but how do you make sure that the talent in the governance space in the governments and people who are actually regulating it they don’t understand the real harm and then it’s going to be a bigger issue and especially when it comes to the you know the list of countries that we talked about always when you get deeper down the list you will find the talent issue in terms of understanding.” [5].


Major discussion point

Barriers and Opportunities for AI Adoption in Developing Economies


Topics

Artificial intelligence | Capacity development


Talent Development for Governance

Explanation

He emphasizes that beyond technical AI expertise, emerging economies need policymakers and regulators who understand sector‑specific harms to implement effective AI governance.


Evidence

“we do while there could be horizontal governance which will apply to every AI systems I think the more meaningful governance that you’re going to find when you get into sectoral governance meaning when you look at the AI systems for health care and you’ll find there are the understanding of a harm in the health care segment is very different from financial services and so on so how do you get into those sectoral areas then you can have meaningful governance structure and last but not the least you need to have the right talent and people who can actually who understand all of this in both in public sector and people who are supposedly governing all of this that is something which is it’s not the talent in terms of broader AI model building and people who are building AI systems but how do you make sure that the talent in the governance space in the governments and people who are actually regulating it they don’t understand the real harm and then it’s going to be a bigger issue and especially when it comes to the you know the list of countries that we talked about always when you get deeper down the list you will find the talent issue in terms of understanding.” [5].


Major discussion point

Capacity Building and Policy Implementation for Emerging Economies


Topics

Capacity development | Artificial intelligence


Rafik Rikorian

Speech speed

189 words per minute

Speech length

1391 words

Speech time

439 seconds

Open‑Source Stack as Digital Sovereignty

Explanation

Rafik likens an open‑source AI stack to the Linux/LAMP model, arguing it can provide a common foundation while allowing each nation to customize for local values and needs.


Evidence

“So I do think if we can find the LAMP stack equivalent model for AI, and this is actually what I’ve turned all of Mozilla towards, of just like how do we define open standards, how do we define open interfaces so that the vibrancy of the open source community can come together and actually build solutions that work for every single person, every single community, every single government on the planet.” [25].


Major discussion point

Open‑Source Models, Standards, and Shared Infrastructure


Topics

Artificial intelligence | The enabling environment for digital development


Data Trusts for Ethical Data Sharing

Explanation

He proposes data‑trust models that ensure provenance, licensing, and compensation, citing indigenous data collectives as a concrete example.


Evidence

“i mean i do think a lot about the local fine‑tuning and i think that that’s actually a really powerful concept of like we can all contribute to a core and then locally fine‑tune for our values and our needs and i think that this has shown up in a bunch of different ways and i’m interested personally in all these alternative i don’t even want to call it our alternative but like other architectures that enable this to be possible because in some ways we’re kind of being being fed a regime that says it’s not possible but i think like architecturally it actually is in a bunch of different ways so i love the indigenous data model like looking at what different indigenous peoples have done around data collectives for their local areas so there’s a group of people for example in hawaii that is doing this for their genomic data because genomic data is really useful for pharmaceutical models and so like they’ve been looking for ways so that they can both monetize but also the provenance of their data as it goes through these pharmaceutical models.” [30]. “And what the Collective is meant to be is it’s meant to be a marketplace of ethically sourced but provenance‑traced data sets so that you can bring your data.” [31].


Major discussion point

Collaborative Models & Data Governance Mechanisms


Topics

Data governance | Artificial intelligence


Federated Learning for Cross‑Border Collaboration

Explanation

Rafik suggests federated learning as a technical architecture that lets models be trained on distributed data without moving the raw data, preserving privacy and national control.


Evidence

“So I think that there are these interesting alternative architectures that we can actually start leaning into, these data trust models, these federated learning models, that actually could be massive enablers for cooperation and allow us to build these foundational things that we can then fine‑tune and bring to our local context.” [29].


Major discussion point

Collaborative Models & Data Governance Mechanisms


Topics

Data governance | Artificial intelligence
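The federated‑learning architecture Rafik describes can be sketched minimally with federated averaging: each participant trains on its own data locally, and only model parameters, never the raw data, are shared and pooled. The toy example below (an illustration, not from the session; the two "country" datasets are invented) fits a one‑parameter linear model y = w·x across two private datasets.

```python
# Minimal federated-averaging sketch: raw data stays local; only the
# trained weights are exchanged and averaged by a coordinator.

def local_update(w, data, lr=0.05):
    """One gradient-descent step on a participant's private data."""
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    return w - lr * grad

def federated_average(weights):
    """Coordinator step: average the locally trained parameters."""
    return sum(weights) / len(weights)

# Two participants hold private datasets drawn from y = 2x.
country_a = [(1.0, 2.0), (2.0, 4.0)]
country_b = [(3.0, 6.0), (4.0, 8.0)]

w = 0.0  # shared global model parameter
for _ in range(100):
    wa = local_update(w, country_a)  # trained behind border A
    wb = local_update(w, country_b)  # trained behind border B
    w = federated_average([wa, wb])  # only weights cross borders

print(round(w, 2))  # prints 2.0
```

In a real deployment the "weights" would be the parameters of a neural network and the coordinator a neutral or jointly governed server, but the privacy property is the same: the sensitive records in `country_a` and `country_b` never leave their holders.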


H

Halak Shirastava

Speech speed

69 words per minute

Speech length

931 words

Speech time

798 seconds

Technical Standards (NIST, ISO) as Flexible Frameworks

Explanation

Halak argues that evolving technical standards frameworks, such as those from NIST and ISO, provide a flexible, internationally recognised baseline that helps startups navigate diverse regulatory environments.


Evidence

“You know, there are frameworks like NIST and ISO frameworks.” [16]. “Technical standards is one of them.” [17].


Major discussion point

Open‑Source Models, Standards, and Shared Infrastructure


Topics

Artificial intelligence | The enabling environment for digital development


Interoperability of Shared Resources

Explanation

He stresses that interoperable tools, benchmarks, and evaluation documents—shared openly across big and small players—are essential for a cohesive AI ecosystem.


Evidence

“And the only way this is possible is if industry and all of industry, big and small, the whole ecosystem starts sharing documents and documentation around, you know, red teaming or evals or multilingual benchmarks and things like that to come to some sort of consensus.” [23]. “So I think there’s strong opportunity there as we come together and share documents or, you know, evaluations around misuse or model capabilities or impact of models.” [37].


Major discussion point

Open‑Source Models, Standards, and Shared Infrastructure


Topics

Artificial intelligence | Data governance


Evidence Sharing and Procurement Coalitions

Explanation

Halak highlights that capacity building should include shared performance evidence, benchmarks, and coordinated procurement networks to lower entry barriers for emerging markets.


Evidence

“The second thing I would say is around shared practices, around risk mitigation.” [15]. “And the only way this is possible is if industry and all of industry, big and small, the whole ecosystem starts sharing documents and documentation around, you know, red teaming or evals or multilingual benchmarks and things like that to come to some sort of consensus.” [23].


Major discussion point

Capacity Building and Policy Implementation for Emerging Economies


Topics

Artificial intelligence | Capacity development


A

Audience

Speech speed

122 words per minute

Speech length

53 words

Speech time

26 seconds

Call for Truth, Transparency and Accountability

Explanation

The audience emphasizes the need for an AI ecosystem that is open and answerable, insisting that future governance frameworks must embed clear mechanisms for truth‑telling, transparency and accountability to build public trust.


Evidence

“Looking forward to a truth, transparency and accountability‑driven world.” [2].


Major discussion point

AI Governance & Accountability


Topics

Artificial intelligence | Human rights and the ethical dimensions of the information society


Concern Over Sluggish Regulatory Processes

Explanation

Audience members point out that current policy and filing cycles are excessively slow, warning that such delays could undermine system resilience and precipitate a collapse before reforms take effect.


Evidence

“It takes 30 years for FC files to come out in a place like America, the developed world.” [4]. “Is that the speed of the system till it collapses and till we start a new world?” [5].


Major discussion point

Regulatory Timeliness & Systemic Resilience


Topics

Artificial intelligence


Skepticism About Accepting AI Fate

Explanation

The audience questions whether societies are resigned to a predetermined AI future, urging policymakers to adopt proactive, rather than fatalistic, approaches to governance.


Evidence

“Are we resigned to that fate?” [3].


Major discussion point

Perceived Inevitability & Need for Proactive Governance


Topics

Artificial intelligence


Agreements

Agreement points

Open source models and local fine-tuning enable countries to maintain sovereignty while leveraging global AI capabilities

Speakers

– Bella Wilkinson
– Rafik Rikorian

Arguments

Local fine-tuning of global models allows countries to adapt AI systems to their specific values and contexts


Open source models like Linux provide a template for shared AI infrastructure where countries can contribute to common code base while maintaining sovereignty


Summary

Both speakers agree that open source approaches combined with local adaptation allow countries to benefit from global AI development while maintaining control over their specific implementations and values


Topics

Artificial intelligence | The enabling environment for digital development


International technical standards are preferable to fragmented national regulations for AI governance

Speakers

– Halak Shirastava
– Rajesh Nambia

Arguments

Technical standards frameworks like NIST and ISO are key for startups because they’re flexible and evolving, unlike country-by-country regulations that price out smaller companies


Countries should lead with innovation-first mindset rather than regulation-first, and focus on sectoral governance for meaningful AI oversight


Summary

Both speakers favor flexible international standards over rigid national regulations, emphasizing that innovation should lead regulation and that sectoral approaches are more meaningful than horizontal governance


Topics

Artificial intelligence | The enabling environment for digital development


Shared resources and collaboration models are essential for addressing AI access barriers in developing countries

Speakers

– Rajesh Nambia
– Rafik Rikorian
– Halak Shirastava

Arguments

Regional compute consortiums, shared datasets between government-academia-industry, and cloud credits from hyperscalers show promising collaboration approaches


Data trusts and federated learning models enable international collaboration while keeping sensitive data local and ensuring proper attribution and compensation


Shared practices around risk mitigation and interoperability of resources are essential for the entire AI ecosystem


Summary

All three speakers agree that various forms of resource sharing – from compute consortiums to data trusts to shared practices – are necessary to enable broader participation in AI development


Topics

Artificial intelligence | Financial mechanisms | Capacity development


Coalition building around specific issues is more realistic than global consensus on AI governance

Speakers

– Bella Wilkinson
– Sabina Chofu

Arguments

Global consensus on AI governance is unrealistic in current geopolitical environment, but partial alignment on priority issues is possible through coalition building


Coalition building represents a pragmatic approach to AI governance in the current geopolitical environment, offering a realistic alternative to traditional multilateral cooperation


Summary

Both speakers acknowledge that while comprehensive global AI governance is unrealistic, focused coalitions around specific issues offer a pragmatic path forward


Topics

Artificial intelligence | The enabling environment for digital development


Similar viewpoints

Both speakers recognize that developing economies face significant structural disadvantages in AI access and that meaningful capacity building requires substantive resource sharing rather than superficial training approaches

Speakers

– Rajesh Nambia
– Halak Shirastava

Arguments

The AI divide will be much bigger than the digital divide, creating significant disadvantages for smaller and developing economies


Capacity building requires shared evidence, performance benchmarks, and cross-border procurement policy networks rather than just workshops


Topics

Artificial intelligence | Closing all digital divides | Capacity development


Both speakers are concerned about concentration of power in AI governance and advocate for more distributed, inclusive approaches that enable broader participation from smaller players

Speakers

– Rafik Rikorian
– Halak Shirastava

Arguments

Current AI governance is effectively being done by a few frontier model companies, which is an untenable situation for global decision-making


Technical standards frameworks like NIST and ISO are key for startups because they’re flexible and evolving, unlike country-by-country regulations that price out smaller companies


Topics

Artificial intelligence | The enabling environment for digital development


All three speakers support models that combine global collaboration with local adaptation, allowing countries and organizations to benefit from shared resources while maintaining control over their specific implementations

Speakers

– Bella Wilkinson
– Rafik Rikorian
– Halak Shirastava

Arguments

Local fine-tuning of global models allows countries to adapt AI systems to their specific values and contexts


Open source models like Linux provide a template for shared AI infrastructure where countries can contribute to common code base while maintaining sovereignty


Shared practices around risk mitigation and interoperability of resources are essential for the entire AI ecosystem


Topics

Artificial intelligence | The enabling environment for digital development | Data governance


Unexpected consensus

Innovation should precede regulation in AI governance

Speakers

– Rajesh Nambia
– Halak Shirastava

Arguments

Countries should lead with innovation-first mindset rather than regulation-first, and focus on sectoral governance for meaningful AI oversight


Technical standards frameworks like NIST and ISO are key for startups because they’re flexible and evolving, unlike country-by-country regulations that price out smaller companies


Explanation

This consensus is unexpected because it goes against the common policy approach of establishing regulatory frameworks before allowing innovation to proceed. Both speakers from different backgrounds agree that in AI, innovation should lead and regulation should follow with flexible, sector-specific approaches


Topics

Artificial intelligence | The enabling environment for digital development


Open source approaches can address both sovereignty and inclusion concerns simultaneously

Speakers

– Bella Wilkinson
– Rafik Rikorian

Arguments

Local fine-tuning of global models allows countries to adapt AI systems to their specific values and contexts


Open source models like Linux provide a template for shared AI infrastructure where countries can contribute to common code base while maintaining sovereignty


Explanation

This consensus is unexpected because sovereignty and inclusion are often seen as competing priorities – countries wanting to maintain control versus wanting to participate in global systems. Both speakers show how open source models can satisfy both needs simultaneously


Topics

Artificial intelligence | The enabling environment for digital development | Closing all digital divides


Overall assessment

Summary

The speakers demonstrated strong consensus around pragmatic, collaborative approaches to AI governance that balance global cooperation with local sovereignty. Key areas of agreement include the value of open source models with local adaptation, the preference for flexible international standards over rigid national regulations, the necessity of resource sharing mechanisms for developing countries, and the realistic focus on coalition building rather than seeking global consensus.


Consensus level

High level of consensus on practical approaches, with speakers from different sectors (policy research, technology, industry association, private sector) converging on similar solutions. This suggests these approaches have broad stakeholder support and could be viable paths forward for international AI cooperation. The consensus implies that successful AI governance will likely emerge from bottom-up coalition building around specific technical and resource-sharing mechanisms rather than top-down multilateral agreements.


Differences

Different viewpoints

Approach to AI governance – regulation vs innovation priority

Speakers

– Rajesh Nambia
– Halak Shirastava

Arguments

Countries should lead with innovation-first mindset rather than regulation-first, and focus on sectoral governance for meaningful AI oversight


Technical standards frameworks like NIST and ISO are key for startups because they’re flexible and evolving, unlike country-by-country regulations that price out smaller companies


Summary

Nambia advocates for innovation-first approaches with sectoral governance, while Shirastava emphasizes the importance of international technical standards and frameworks for regulatory compliance


Topics

Artificial intelligence | The enabling environment for digital development


Scale of governance approach – sectoral vs horizontal

Speakers

– Rajesh Nambia
– Halak Shirastava

Arguments

Countries should lead with innovation-first mindset rather than regulation-first, and focus on sectoral governance for meaningful AI oversight


Technical standards frameworks like NIST and ISO are key for startups because they’re flexible and evolving, unlike country-by-country regulations that price out smaller companies


Summary

Nambia argues for sector-specific governance approaches noting that harm understanding varies across sectors, while Shirastava focuses on horizontal international standards that work across sectors


Topics

Artificial intelligence | The enabling environment for digital development


Unexpected differences

Role of current AI companies in governance

Speakers

– Rafik Rikorian
– Halak Shirastava

Arguments

Current AI governance is effectively being done by a few frontier model companies, which is an untenable situation for global decision-making


Increasing participation from both companies and countries will drive convergence in standards and improve overall AI literacy across sectors


Explanation

Unexpected because both work in the AI industry but have opposing views – Rikorian sees current company involvement in governance as problematic concentration of power, while Shirastava views increasing company participation as positive for standards convergence


Topics

Artificial intelligence | The enabling environment for digital development


Overall assessment

Summary

The discussion showed relatively low levels of fundamental disagreement, with most speakers aligned on core challenges and the need for international cooperation. Main disagreements centered on implementation approaches rather than goals.


Disagreement level

Low to moderate disagreement level. The speakers generally agreed on the problems (AI divides, need for cooperation, capacity building) but differed on solutions and priorities. This suggests good potential for finding common ground, as the disagreements are more tactical than strategic. The consensus on challenges combined with diverse solution approaches could actually strengthen policy development by providing multiple pathways forward.


Partial agreements


All speakers agree on the need for international cooperation and shared resources in AI development, but disagree on the mechanisms – Wilkinson emphasizes coalition building around sovereignty messaging, Rikorian advocates for open source models as the primary vehicle, while Shirastava focuses on technical standards and industry collaboration

Speakers

– Bella Wilkinson
– Rafik Rikorian
– Halak Shirastava

Arguments

Coalition building around sovereignty and strategic autonomy messaging can drive cooperation where collective benefits outweigh individual capabilities


Open source models like Linux provide a template for shared AI infrastructure where countries can contribute to common code base while maintaining sovereignty


Shared practices around risk mitigation and interoperability of resources are essential for the entire AI ecosystem


Topics

Artificial intelligence | The enabling environment for digital development


Both agree on the importance of local adaptation and regional cooperation, but Wilkinson emphasizes fine-tuning global models for local contexts while Nambia focuses more on practical resource-sharing mechanisms like compute consortiums and cloud credits

Speakers

– Bella Wilkinson
– Rajesh Nambia

Arguments

Local fine-tuning of global models allows countries to adapt AI systems to their specific values and contexts


Regional compute consortiums, shared datasets between government-academia-industry, and cloud credits from hyperscalers show promising collaboration approaches


Topics

Artificial intelligence | Capacity development | Financial mechanisms


Both recognize the critical importance of capacity building, but Nambia emphasizes the need for governance talent in government sectors while Shirastava focuses on substantive evidence sharing and procurement policy coordination

Speakers

– Rajesh Nambia
– Halak Shirastava

Arguments

Talent development in governance and regulatory understanding is critical, especially for countries with limited AI expertise


Capacity building requires shared evidence, performance benchmarks, and cross-border procurement policy networks rather than just workshops


Topics

Artificial intelligence | Capacity development


Takeaways

Key takeaways

Global consensus on AI governance is unrealistic in the current geopolitical environment, but coalition building around specific priority areas offers a pragmatic path forward


The AI divide will be significantly larger than the digital divide, creating substantial disadvantages for smaller and developing economies due to barriers in compute access, data quality, infrastructure, and skills


Open source models and collaborative frameworks (similar to Linux) can provide shared AI infrastructure while allowing countries to maintain sovereignty through local fine-tuning and adaptation


Technical standards frameworks like NIST and ISO are more practical for international cooperation than country-by-country regulations, especially for startups and smaller players


Successful cooperation models include regional compute consortiums, shared datasets between sectors, federated learning, data trusts, and cloud credit programs


Capacity building requires shared evidence and benchmarks rather than just workshops, and countries should prioritize innovation-first approaches over regulation-first mindsets


Sectoral governance (healthcare, finance, etc.) is more meaningful than horizontal governance across all AI systems due to different harm profiles


Talent development in AI governance and regulatory understanding is critical, particularly for developing nations with limited expertise


Resolutions and action items

Mozilla is creating a Data Collaborative marketplace for ethically sourced, provenance-traced datasets to enable fair compensation and attribution


Mozilla is pursuing conversations with radio stations globally to license their recordings and transcripts rather than allowing free scraping


Industry players should contribute shared evidence, performance benchmarks, and documentation to lift up other players in the ecosystem


Development of cross-border procurement policy networks to open markets to global players


Focus on building technical standards through international frameworks rather than fragmented national regulations


Unresolved issues

How to effectively scale coalition-building approaches to include the majority of the world’s ~200 countries beyond the top 5-10 economies


Specific mechanisms for ensuring equitable access to compute resources and addressing the growing AI divide


How to balance sovereignty concerns with the need for international cooperation and shared resources


Methods for ensuring adequate representation of developing world populations in AI training data


Practical implementation of federated learning and data trust models at scale


How to develop sufficient AI governance talent in countries with limited technical expertise


Addressing the fundamental tension between innovation-first and regulation-first approaches across different national contexts


Suggested compromises

Coalition building around specific technical areas (verification, chip hardware, risk mitigation) rather than seeking comprehensive global consensus


Shared infrastructure models where countries contribute to common AI foundations but maintain sovereignty through local fine-tuning


Flexible technical standards frameworks that evolve with industry input rather than rigid national regulations


Federated learning approaches that allow international collaboration while keeping sensitive data within national borders


Data trust models that enable monetization and attribution for data providers while allowing broader access for AI development


Sectoral governance approaches that recognize different risk profiles across industries rather than one-size-fits-all regulations


Public-private partnerships for sharing compute resources, datasets, and cloud credits to reduce barriers for developing economies


Thought provoking comments

Global consensus on how to govern AI is a no-go. It is not going to happen in this geopolitical environment. However, partial alignment on priority issue areas is possible, and it’s pragmatic to throw our weight behind these smaller gatherings that we can then scale using the multilateral format.

Speaker

Bella Wilkinson


Reason

This comment is deeply insightful because it cuts through the optimistic summit rhetoric to present a stark geopolitical reality. Wilkinson acknowledges the fundamental constraints of current international relations while offering a pragmatic alternative – coalition building around specific issues rather than comprehensive global agreements. This reframes the entire governance discussion from idealistic to realistic.


Impact

This comment set the pragmatic tone for the entire discussion. It shifted the conversation away from broad multilateral aspirations toward practical coalition-building strategies. Subsequent speakers built on this framework, with Rajesh discussing regional consortiums and shared resources, and others focusing on specific technical standards rather than comprehensive governance frameworks.


I think the AI divide is going to be much, much bigger than the digital divide which we saw, because the biggest difference is that the digital divide was at least about access, whereas this is all about agency, and that can completely put you on the back foot.

Speaker

Rajesh Nambia


Reason

This observation is profound because it distinguishes between mere access (digital divide) and fundamental agency (AI divide). Nambia identifies that AI isn’t just about having technology – it’s about having the power to shape and control it. This insight elevates the discussion beyond technical infrastructure to questions of sovereignty and self-determination.


Impact

This comment deepened the conversation by introducing the concept of ‘agency’ as distinct from ‘access.’ It influenced subsequent discussions about sovereignty, with other speakers picking up on themes of local fine-tuning, indigenous data models, and the importance of countries maintaining control over their AI development rather than just consuming foreign AI services.


For all practical purposes, every computer on the planet runs Linux… I think there’s an analogy here of being able to use shared infrastructure, shared software infrastructure as a collaboration mechanism that we can all pool resources together but still have sovereignty on top of it.

Speaker

Rafik Rikorian


Reason

This analogy is brilliant because it provides a concrete, successful model for international technological cooperation that maintains sovereignty. The Linux example demonstrates how countries can contribute to and benefit from shared infrastructure while retaining control over their implementations. It offers a tangible pathway forward rather than abstract cooperation concepts.


Impact

This comment introduced a paradigm shift from viewing AI cooperation as zero-sum to seeing it as potentially collaborative. It sparked discussions about open-source models, federated learning, and local fine-tuning throughout the rest of the conversation. Other speakers began referencing specific examples like the Southeast Asian Languages model and data collectives, building on this foundational concept.


We’re living in this world where there are a few frontier model companies that are effectively doing governance for all of us in some way, shape, or form… you don’t want four people in San Francisco making governance decisions for the entire world.

Speaker

Rafik Rikorian


Reason

This comment crystallizes a critical democratic deficit in AI governance that often goes unstated. By highlighting how a small number of private companies are making decisions that affect billions globally, Rikorian exposes the fundamental legitimacy crisis in current AI governance structures.


Impact

This observation reinforced the urgency around finding alternative governance models and gave moral weight to the technical solutions being discussed. It connected the technical discussions about open source and federated learning to broader questions of democratic governance and global equity, elevating the stakes of the conversation.


I believe that countries, and especially the countries we talked about from an inclusion point of view, have got to lead with an innovation‑first mindset, because I think regulation is required and certainly needed, but I think innovation is probably needed more in some sense.

Speaker

Rajesh Nambia


Reason

This insight challenges the conventional wisdom that governance should lead with regulation. Nambia argues that for developing countries, fostering innovation should take priority over regulatory frameworks. This perspective recognizes that over-regulation could stifle the very capabilities these countries need to develop to participate meaningfully in the AI ecosystem.


Impact

This comment introduced a nuanced perspective on the relationship between innovation and regulation, particularly for developing economies. It influenced the discussion toward more flexible, adaptive governance approaches and reinforced earlier points about the need for countries to build indigenous capabilities rather than just consuming foreign AI technologies.


Overall assessment

These key comments fundamentally shaped the discussion by establishing a realistic, pragmatic framework for AI governance that moved beyond idealistic multilateral aspirations. Wilkinson’s opening reality check set the tone for practical coalition-building, while Nambia’s distinction between access and agency deepened the analysis of what’s truly at stake for developing nations. Rikorian’s Linux analogy provided a concrete model for collaborative sovereignty, shifting the conversation from theoretical to actionable. Together, these insights created a coherent narrative arc: from acknowledging geopolitical constraints, to understanding the stakes for developing nations, to identifying viable pathways forward through open-source collaboration and innovation-first approaches. The discussion evolved from pessimistic realism to cautious optimism, with each speaker building on these foundational insights to explore specific mechanisms for inclusive AI governance.


Follow-up questions

How can we bring rivals and competitors around the same table in AI governance given current geopolitical tensions?

Speaker

Bella Wilkinson


Explanation

This addresses the core governance puzzle of facilitating cooperation between states with minimal alignment of interests, particularly in the context of US-China AI competition


How do we define open standards and open interfaces for AI to enable global collaboration?

Speaker

Rafik Rikorian


Explanation

This is crucial for creating a ‘LAMP stack equivalent’ for AI that would allow countries to maintain sovereignty while contributing to shared infrastructure


What would effective data trust models look like for different regions and communities?

Speaker

Rafik Rikorian


Explanation

Building on examples like Hawaiian genomic data collectives, this explores how communities can maintain control over their data while participating in AI development


How can federated learning be implemented for international AI collaboration in sensitive sectors like healthcare?

Speaker

Rafik Rikorian


Explanation

This would allow countries to contribute data and compute resources without releasing sensitive information across borders


What specific procurement policies could enable cross-border AI cooperation for emerging economies?

Speaker

Halak Shirastava


Explanation

This addresses how policy frameworks can open markets and enable global players to participate in emerging economy AI development


How can sectoral AI governance be developed for different industries like healthcare and financial services?

Speaker

Rajesh Nambia


Explanation

This recognizes that meaningful governance requires understanding sector-specific harms and applications rather than horizontal approaches


What training and capacity building programs are needed for government officials to understand AI governance?

Speaker

Rajesh Nambia


Explanation

This addresses the talent gap in public sector understanding of AI systems and their potential harms, particularly in developing countries


How can the $24 billion in potential savings from switching to open source AI models be realized?

Speaker

Rafik Rikorian


Explanation

This economic argument for open source adoption needs practical implementation strategies to achieve the projected cost savings


What mechanisms can ensure equitable access to compute resources and cloud credits for developing nations?

Speaker

Rajesh Nambia


Explanation

This addresses the fundamental infrastructure barriers that create the ‘AI divide’ between developed and developing countries


How can multilingual AI models like the Southeast Asian Languages in One Network (SEA-LION) model be scaled and replicated?

Speaker

Bella Wilkinson


Explanation

This explores how successful regional AI collaborations can serve as templates for other geographic and linguistic communities


Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.