Smart Regulation: Rightsizing Governance for the AI Revolution
20 Feb 2026 17:00h - 18:00h
Summary
The panel opened by framing the summit’s focus on “governance for an AI-driven world” and the need to give all nations access to AI resources through shared compute and data initiatives [1-5][10-14][15].
Bella Wilkinson argued that a universal AI-governance consensus is unattainable in the current geopolitical climate, but partial alignment on priority issues can be achieved by building coalitions that emphasize sovereignty and strategic autonomy, especially for resource-constrained countries that might pool compute resources [26-28][38-44][43].
Rajesh Nambia highlighted that the emerging “AI divide” will far exceed the previous digital divide, pointing to limited access to high-performance compute, high costs, fragmented and low-quality data, and broader infrastructure gaps such as power and connectivity [56-60][61-66]; he suggested public-private compute consortia, shared GPU clusters and cloud-credit schemes as practical ways for developing economies to participate [132-133].
Rafik Rikorian proposed an open-source model as a template for AI collaboration, likening the universal Linux code base and the LAMP stack to a shared infrastructure that can be locally fine-tuned while preserving digital sovereignty; he called for open standards and interfaces to prevent a handful of frontier-model firms from monopolising AI governance [68-78][84-89][90-96].
Halak Shirastava reinforced the promise of technical standards (e.g., NIST, ISO) and shared risk-mitigation practices, stressing the importance of shared evidence, coordinated procurement policies and interoperability of resources to build capacity in emerging economies; she expressed optimism that increased stakeholder participation will drive measurable progress within the next year [102-108][110-115][188-196][218-224].
Overall, the discussion converged on the view that while global AI-governance consensus is unlikely, targeted coalitions, open-source-inspired frameworks, and shared standards can enable meaningful cooperation and capacity-building for smaller and developing nations.
Keypoints
Major discussion points
– Global AI governance is unlikely to achieve full consensus, but targeted coalitions and partial alignment are feasible.
Bella notes that “global consensus on how to govern AI is a no-go” in the current geopolitical climate, yet “partial alignment on priority issue areas is possible” and can be built through smaller coalitions that later scale via multilateral formats [26-29][36-40][42-44].
– Developing and smaller economies face a multi-layered “AI divide” that goes beyond the traditional digital gap.
Rajesh highlights three core barriers: limited and expensive compute resources; fragmented, low-quality data silos; and foundational infrastructure deficits such as power and connectivity, all of which compound talent shortages [57-63][68-71][73-76].
– Open-source models and shared software infrastructure can provide a pathway to digital sovereignty and collaborative AI development.
Rafik draws an analogy to the Linux/LAMP stack, arguing that a common open-source core with locally-fine-tuned layers would let every nation retain sovereignty while contributing to a shared ecosystem [68-78][80-88][90-96].
– Technical standards, shared risk-mitigation practices, and interoperability are key levers for scaling governance and enabling smaller players.
Halak points to evolving frameworks such as NIST and ISO, the need for shared evaluation documents, and the importance of interoperable resources (e.g., red-team reports, multilingual benchmarks) to avoid “price-out” effects for startups [102-108][110-115][118-124].
– Capacity-building must go beyond workshops to include shared evidence, procurement policy coalitions, and sector-specific governance mechanisms.
Both Halak and Rajesh stress that emerging economies need concrete tools: shared performance benchmarks, cross-border procurement networks, and sector-focused regulatory approaches (e.g., health-care vs. finance) to develop the talent and policies required for responsible AI [184-191][192-199][213-215][219-224].
Overall purpose / goal of the discussion
The panel was convened to explore how the international community can “up-level the playing field” for smaller and developing nations by sharing compute, data, and governance resources, and by identifying practical mechanisms (coalitions, open-source models, standards, and capacity-building) that can foster equitable AI development across sectors such as health, education, and climate resilience [2-5][18-21].
Overall tone and its evolution
– The conversation opens with a pragmatic, somewhat pessimistic tone about the feasibility of worldwide AI governance consensus [26-28].
– It quickly shifts to constructive optimism, emphasizing coalition-building, open-source collaboration, and concrete standards as achievable pathways [40-44][68-78][102-108].
– By the latter half, the tone becomes forward-looking and hopeful, with speakers highlighting imminent progress in standards, capacity-building, and sector-specific governance over the next 12-18 months [211-224][218-224].
Thus, the discussion moves from acknowledging geopolitical constraints to outlining actionable, collaborative solutions that inspire confidence in the near-term future.
Speakers
– Sabina Chofu
– Areas of expertise: International AI policy, governance, multilateral cooperation
– Role/Title: International Policy and Strategy Lead at TechUK (sister association of NASCOM in the UK)
– Affiliation: TechUK
– Bella Wilkinson
– Areas of expertise: Digital society, AI governance, coalition building
– Role/Title: Research Fellow, Digital Society Program
– Affiliation: Chatham House
– Rafik Rikorian
– Areas of expertise: Open-source technology, shared AI infrastructure, standards
– Role/Title: Chief Technology Officer
– Affiliation: Mozilla
– Rajesh Nambia
– Areas of expertise: AI adoption in emerging economies, compute & data infrastructure, public-private partnerships
– Role/Title: President
– Affiliation: NASCOM (National Association of Software and Service Companies, India) [S1]
– Halak Shirastava
– Areas of expertise: Global AI policy, technical standards, interoperability, capacity building
– Role/Title: Global Public Policy Lead (AI)
– Affiliation: Cohere (Canadian AI developer) [S2]
– Audience
– Areas of expertise: –
– Role/Title: Audience member(s)
– Affiliation: –
Additional speakers:
– Navreena Singh – Mentioned as absent; affiliated with Credo AI.
The session opened with Sabina Chofu, International Policy and Strategy Lead at TechUK, who noted that Navreena Singh could not attend because of a meeting with the president and positioned the summit under the theme “governance for an AI-driven world” [2-4][9-11]. She also reminded the audience that TechUK is the sister association of NASCOM in the UK [15-17].
Bella Wilkinson, research fellow on the Digital Society Programme at Chatham House, set a realistic tone by stating that a universal AI-governance consensus is currently a “no-go” in the geopolitical climate [26-28]. She argued that, while full alignment is unattainable, partial alignment on priority issues can be achieved through issue-specific coalitions that may later scale via multilateral formats [12-15]. Wilkinson highlighted the accelerating US-China AI race, the opacity of frontier models, and the erosion of trust in international institutions, and suggested that coalition-building should be framed around “sovereignty and strategic autonomy” for resource-constrained countries [34-37][38-41].
Rajesh Nambia, President of NASCOM India, described the emerging “AI divide” as larger than the earlier digital divide because it concerns both agency and access [56-60]. He identified three inter-linked barriers for emerging economies: (1) severe scarcity and high cost of high-performance compute, even after adjusting for purchasing-power parity [57-60]; (2) fragmented, low-quality data silos across government departments that impede the creation of representative models [61-66]; and (3) foundational infrastructure gaps-including unreliable power, limited clean energy, and insufficient connectivity-that further hinder AI deployment [68-71][73-76]. Nambia cited public-private compute consortia, shared GPU clusters such as India’s AI Mission compute cluster, and cloud-credit schemes from hyperscalers as ways to provide resources without each country having to build a frontier model from scratch [130-133]. He also warned that talent gaps in both AI development and regulatory expertise threaten effective governance [213-215].
Rafik Rikorian, Chief Technology Officer of Mozilla, drew a parallel with the Linux ecosystem, noting that “every computer on the planet runs Linux” and that this model allows anyone to contribute to a common code base while retaining the freedom to fine-tune their own implementations [70-78]. He extended the analogy to the early web, illustrating how the shift to the LAMP stack introduced openness that allowed anyone to build services without needing permission [80-86][87-96]. Applying this to AI, Rikorian described Mozilla’s “Data Collaborative”, a marketplace for ethically sourced, provenance-tracked datasets that compensates data owners (e.g., radio stations) and supplies clean data for model training [157-166]. He also referenced an indigenous data-trust model for Hawaiian genomic data and advocated federated-learning architectures, where model training occurs on local devices and only model weights are shared, preserving data sovereignty while enabling cross-border collaboration on health, language, or other sector-specific models [167-176].
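The federated-learning arrangement Rikorian describes, where training happens on local data and only model weights cross borders, can be sketched in a few lines. The following is a toy illustration of federated averaging, not Mozilla’s actual implementation: the linear model, learning rate, round counts, and synthetic client data are all invented for the example.

```python
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """Train a linear model locally with plain gradient descent.
    Only the resulting weights (never X or y) leave the device."""
    w = weights.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)  # gradient of mean squared error
        w -= lr * grad
    return w

def federated_average(global_w, clients):
    """One round: each client trains on its private data, and the
    coordinator averages the returned weights, weighted by data size."""
    updates, sizes = [], []
    for X, y in clients:
        updates.append(local_update(global_w, X, y))
        sizes.append(len(y))
    return np.average(updates, axis=0, weights=np.array(sizes, dtype=float))

# Two "countries" holding private samples of the same relation y = 2x
rng = np.random.default_rng(0)
clients = []
for n in (50, 80):
    X = rng.normal(size=(n, 1))
    y = 2.0 * X[:, 0] + rng.normal(scale=0.05, size=n)
    clients.append((X, y))

w = np.zeros(1)
for _ in range(30):  # 30 communication rounds
    w = federated_average(w, clients)

print(float(w[0]))  # converges near the shared coefficient 2.0
```

The raw datasets never leave their owners; only the one-dimensional weight vector is exchanged each round, which is the sovereignty-preserving property the panel highlights.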
Halak Shirastava, Global AI and Public Policy Lead at Cohere, emphasized the role of evolving technical standards such as NIST and ISO, describing them as “flexible and evolving” frameworks that can avoid “price-out” effects for startups [102-108]. She highlighted shared risk-mitigation practices-joint misuse evaluations, red-team reports, and interoperable multilingual benchmarks-as essential for scaling governance across large tech firms and smaller players [110-115][118-124]. Shirastava then outlined a three-step capacity-building framework: (a) sharing documented evidence and performance benchmarks; (b) establishing coordinated procurement-policy networks to avoid costly country-by-country compliance; and (c) promoting open-source adoption to prevent billions of dollars of waste on proprietary solutions [183-191][188-196].
An audience member raised a comment about the “30 years for FC files” and a lingering concern about the slow pace of systemic reforms; Sabina acknowledged the comment but conceded that the point had not been directly addressed [135-138][140-144].
Returning to coalition-building, Bella highlighted the “Southeast Asian Languages Under One Network”, a multilingual LLM that combines open-source model inputs with local fine-tuning, illustrating how open-source assets can be adapted to regional contexts while supporting robust national institutions and cross-border cooperation [151-155]. Rikorian expanded on this by reiterating the potential of the Mozilla Data Collaborative and federated-learning architectures, and Shirastava reinforced the importance of the three-step capacity-building framework. Rajesh concluded by urging an “innovation-first” mindset, recommending pilot projects and sector-specific governance (e.g., health-care versus finance) before imposing heavy regulation [213-215].
In closing, Sabina summarized the panel’s consensus: (i) targeted, issue-specific coalitions are the most pragmatic route to partial governance alignment; (ii) open-source-inspired infrastructures and open standards can provide shared foundations while preserving national sovereignty; (iii) technical standards (NIST, ISO) and shared risk-mitigation practices are vital for inclusive participation; and (iv) capacity-building must move beyond ad-hoc workshops to systematic sharing of evidence, benchmarks, and procurement frameworks [26-29][36-40][68-78][102-108][184-191]. Shirastava projected that increased stakeholder participation over the next year will accelerate standards development, raise AI literacy across public and private sectors, and deliver concrete capacity-building outcomes [218-224]. Rikorian echoed this optimism, noting that federated-learning and data-trust models are already maturing and could be deployed at scale within the coming months [176].
Notable disagreements were recorded. Nambia emphasized compute access as the primary barrier and advocated an innovation-first approach, whereas Wilkinson placed greater weight on coalition-driven governance mechanisms rather than direct compute provision [57-60][26-29][38-44]. Rikorian’s vision of an open-source stack contrasted with Shirastava’s focus on formal standards bodies, reflecting a tension between community-driven and standards-driven pathways [70-78][102-108]. Finally, Nambia’s “innovation-first” stance conflicted with Shirastava’s claim that early adoption of flexible standards and coordinated procurement policies is essential to avoid costly regulatory fragmentation [213-215][102-108].
Overall, the panel agreed that while a single global AI-governance regime is unlikely, the combination of targeted coalitions, open-source-style shared infrastructures, evolving technical standards, and robust capacity-building programmes offers a viable roadmap for narrowing the AI divide and empowering smaller and developing nations to participate meaningfully in an AI-driven future.
about this morning is rightsizing governance for an AI-driven world. So what we’ll try to do with a pretty excellent panel, as I’m sure you’ll agree, is talk a bit about shared compute and data initiatives that hopefully give all nations access to AI resources. We’ll look a bit at how to up-level the playing field for smaller and developing nations. And we’ll talk about collaboration in key sectors like healthcare and education and climate resilience. I’ve got a perfect panel to do that with. I’m going to introduce them all first, and then we’ll dive straight into the conversation. So unfortunately, Navreena Singh from Credo AI couldn’t be with us this morning. She’s got a meeting with the president, so she’s excused.
But we do have… I’ll start with, just next to me here, Bella Wilkinson, who’s a research fellow on the Digital Society Program at Chatham House. Next to her is Rafik Rikorian (I hope I’ve pronounced that vaguely okay), who is the Chief Technology Officer for Mozilla. Next to him, we’ve got Rajesh Nambia, who is the President of NASCOM, our sister association here in India. And last but not least, we’ve got Halak Shirastava, who’s Global AI and Public Policy and Regulatory Affairs at Cohere. And for those of you who don’t know me, I’m Sabina Chofu, I’m International Policy and Strategy Lead at TechUK. So we are the sister association of NASCOM back in the UK.
So without further ado, we will start with setting a bit of a global context, and who better to do that than Isabella. So from a kind of geopolitical perspective, how realistic, I guess, is alignment on AI governance across countries with… fair to say very different strategic interests right now. And where do you see maybe multilateral institutions? I know multilateralism is not a very popular theme these days, but where do you see multilateral institutions or maybe other international players playing a role in this space? So over to you.
Thank you, Sabina. Thanks to my fellow speakers. It’s great to be here today, really keeping the energy up on the final day of the summit. We can all do it. Let me answer your question directly and then perhaps elaborate a little bit more in detail. Global consensus on how to govern AI is a no-go. It is not going to happen in this geopolitical environment. However, partial alignment on priority issue areas is possible, and it’s pragmatic to throw our weight behind these smaller gatherings that we can then scale using the multilateral format. Now, let’s take a second. Let’s take a second to sketch out the state of play. We have some great experts in the room, on the panel, so I won’t spend too long doing this.
We have been absolutely covered in really optimistic summit rhetoric, walking into Bharat Mandapam, going to side events over the course of this week. But despite the optimism, outside of these walls, in the background, the US-China AI race continues to accelerate to the umpteenth degree. The capabilities of advanced and the most frontier AI systems and models, the little we know about their capabilities, mind, with huge gaps in transparency, continue to advance. And global scientists only recently have issued warnings about the state of the science and the intense uncertainty surrounding these capabilities and the impact they might be having on our communities and societies. Well, it’s a good thing we have strong international institutions and shared values. We don’t. You know, it’s a really difficult time for global cooperation outside of AI. We’re seeing, I would argue, an unprecedented degradation since the Second World War of the international organizations, the shared values, the rule of law that we have all held so dearly. So suffice to say, it’s a difficult time for global governance, and a difficult time for the global governance of AI. Now, institutions in the past have very much been brokers, mediators and scalers of consensus on tricky governance issues, and some of the governance problems we’re facing today are pretty old, right? I mean, I’ve encountered them in previous roles at Chatham House and other areas of tech, and I’m sure the experts on our panel have come across them. And the core governance puzzle that we need to figure out is this: taking into account the state of geopolitics, the uncertainty around the state of the science, the market dynamics mediated by these leading labs, and the intensely, intensely competitive US and Chinese AI race dynamics, how on earth do we bring rivals and competitors around the same table?
How do we bring states with a nominal or a minimal alignment of interests and incentives into the same room? Now, you started by asking me about multilateralism and institutions, but maybe let’s reframe this and talk about coalitions. In other areas of governance, what we’ve seen is intense coalition building in crisis or unstable settings around a trusted mechanism, a trusted approach, perhaps in the absence of shared values and principles. And what I’m really interested in, in the context of AI, is where coalition building can develop trust around a credible governance approach, adopt a state champion, get support from associations, from builders, from leading labs themselves, and then scale it using the multilateral format. And over the past few days, I’ve been really excited by some of these splintering-to-scale dynamics that I’ve seen, maybe in conversations on verification, on-chip hardware, risk mitigation strategies, even anonymized collection of usage data, which came out of the commitments yesterday.
Now, what’s the messaging that can drive this coalition building in the absence of trusted institutions, in the absence of shared values? I’ll get into this later in my remarks, but I think it has to be sovereignty and strategic autonomy. Resource-constrained countries who might decide to adopt a common data governance approach, who might decide to pool resources like compute, have to also consider a degree of governance alignment, again, at this low-hanging fruit, in order to not only withstand the dynamics of the AI race, but to ensure that the collective benefits of cooperation and governance alignment massively outweigh anything they could do individually. So I think I’ll leave it there. Slightly pessimistic take. Let’s see if there’s some more optimism on the panel.
Thank you so much, Bella. I don’t think it was that pessimistic. You did kind of, I think you made it sound very pragmatic in terms of, look, the world is not what we want it to be, and there isn’t the level of multilateral cooperation that we maybe used to have. But you have talked about coalition building, and it’s probably the best we can hope for in the world as it is, as opposed to the world as we’d like it right now. And Rajesh, can I turn to you next? For emerging economies, obviously access to compute, data and infrastructure is critical, but what do you see as some of the most pressing barriers, but also maybe opportunities, for AI adoption in India and beyond?
Over to you.
First of all, thank you for having me on the panel. Pleased to be with all of you, and a few of you showed up here as well, so thank you for coming up. We wish this was the Modi inauguration last evening, with a little bit more than this crowd, but nevertheless, we’ll do with this. But you know, we used to talk about the digital divide for a long period of time, and while that had its own puts and takes when you compare a smaller economy and smaller country with a larger one and so on, I think the AI divide is going to be much, much bigger than the digital divide which we saw. The biggest difference is that the digital divide was at least about access and so on, whereas this is all about agency, and it can completely put you on a different back foot. So it is such an important topic to talk about when you talk about the broader haves and have-nots and what really goes on with the larger and smaller economies and so on. And I truly believe that accessibility, when you look at the broader scales, will come across multiple things, starting with compute, one of the largest pieces of what we are talking about here. As you mentioned, there is the race between the US and China and so forth, but if you leave those two countries, then of course we have a big drop in terms of where the real access is going to be. And I believe totally that, you know, the continued limited access to broader compute facilities is going to put some of these smaller countries, especially the developing ones, at a little bit of a disadvantage.
So, I think there’s a lot that can be done around it in terms of saying, you know, what is that, you know, countries can potentially do in terms of pooling and so on. But I think there is certainly an issue when it comes to compute. And, you know, not just in terms of accessibility, but also in terms of expense and so on, because at the end of the day, all of these are, even if you use the purchasing power parity, and then sort of look at what it costs for people to sort of get into the kind of level of GPUs, potentially, or GPU clusters one has to produce to even have a meaningful language model and so on.
I think that’s going to be a very different ballgame. And the second element of this whole broader issue that we’re talking about is also the data: the organization of data, the availability of data, the quality of data, and so on. The more you get into the developing world, you will find that the data itself is very siloed in many ways. There are, you know, different state silos, different department silos, and so on, and it gets to a point where the data, which is such an important and integral part of everything to do with AI, the data which gets fed into the broader models and eventually the AI systems, will not necessarily have the right representation of that population, which is a huge concern. I mean, India is slightly luckier in many ways, you know, playing that game a little bit, punching a little bit above our weight in some sense. But when you go down the list of countries which do not have access to all of these, I think you’re going to find it even harder to solve the data issue: data availability, data quality, all of that becomes a bigger issue. And when we talk about the infrastructure gap and the compute gap, it’s a little bit more than just the pure compute itself, GPUs and so on. It’s also about connectivity and power. These are the issues which, you know, we somehow take for granted in other segments, but I think you will find that power is going to be a huge foundation for all of that. As you know, there are multiple layers in building any AI system, and one of the bottom-most layers is going to be power: what really happens to the power, and if it has to be clean power, then, you know, does it put an additional tax on the developing world to make sure that power comes out clean? Connectivity is a huge issue too. Even though it’s kind of broadly solved in some sense with all the satellite options and so on, the kind of connectivity you need to run a truly inclusive AI system is going to be very different from what people have thought. And then of course we can go on and on in terms of the other layers: the availability of skills, and ensuring that you have the right skills not just to leverage AI but also to build AI. I mean, there are two different types of capability that you need to produce in any country. So these are the issues. The opportunity itself would be to look at this and say, are there other ways of collaborating, other ways of partnering, and so on? Because, especially when you go down the list of countries, we have close to 200 countries or so in the world, and when you leave the top 5 or 10 and then keep going down the list, it becomes harder. I mean, I don’t think that everybody is going to be producing a full-blown large language model for themselves. At that point in time, the question will be: can you really partner, can you really leverage some of the common systems that can be built across these countries, and so on.
Thank you. I mean, you’ve done a brilliant job of laying out all three problems we’ve got and then saying you’ve got a long list afterwards in terms of cooperation. But I love the touch of optimism there at the end. It’s like, you know, if you lift a country out of the room, you still have a hundred and whatever, 85 that need to figure it out. So I liked a lot of that framing. And thanks for touching on
I mean, unsurprisingly, being someone from Mozilla, I’ll probably go with the open source angle as one of the opportunities to actually align the talent, align the capabilities, and actually do shared infrastructure. I mean, maybe I’ll draw two analogies to think about, and then we can go more deep into those as it applies to AI. But for all practical purposes, every computer on the planet runs Linux. There are a few iPhones here and there on top of it. But the Linux model, I think, is a good one for all of us to think about, that every computer… Every country, every nation in the world, almost every company in the world, contributes to the single code base which has been deployed across these billions of computing devices across the planet.
And there are lots of derivative work that happens from it. So like a company like Google can then take that and make it into Android. A company like a vending machine company can deploy Linux onto a Raspberry Pi and run inside their vending machine. So I think there’s an analogy here of being able to use shared infrastructure, shared software infrastructure as a collaboration mechanism that we can all pool resources together but still have sovereignty on top of it. So we can still all be contributing to this common core but then fine-tune our way to our own particular implementations. And I think that if we take that and then marry it with a web analogy of in the early 90s of the original web, you needed to ask for permission in order to deploy a website.
And by permission I mean effectively you had to go buy yourself a Solaris box, or you had to buy yourself a Windows NT server and try to configure an ActiveX scenario. And the beauty of what Mozilla and Firefox did, we’re not the only ones who did it, but the beauty of what they did there is a forced openness throughout the stack that enabled anyone without permission to build whatever they wanted. And I think we need to find a similar moment. So in that world, we went from the Windows NT stack and all of IIS to the LAMP stack. And the LAMP stack has these gorgeous analogies of just like anyone can build on Linux.
When Facebook needed PHP to move faster, they did massive improvements on PHP, which then trickled down to all of us. So people can contribute in different ways across it. That’s not the world we’re currently living in with AI. We’re living in this world where there are a few frontier model companies that are effectively doing governance for all of us in some way, shape, or form. And I agree with my colleague that that’s an untenable situation. I do live in San Francisco, but you don’t want four people in San Francisco making governance decisions for the entire world; that doesn’t make a lot of sense. So I do think if we can find the LAMP stack equivalent model for AI, and this is actually what I’ve turned all of Mozilla towards, of just like how do we define open standards, how do we define open interfaces so that the vibrancy of the open source community can come together and actually build solutions that work for every single person, every single community, every single government on the planet.
You can sort of build upon, you can contribute to the common base, but then build upon it and take it in a way that makes it more aligned with your country’s values or your company’s values or your individual values, and you can fine-tune your solution out of that. So I think there is an analogy here around how open source could actually provide digital sovereignty across all the different levels. Give us agency as a person, give opportunities for flexibility at a corporation level, and then give countries the ability to own their version of the stack. That could actually be quite beautiful if we can actually figure out how to do that in an appropriate way.
I tried to give you a dose of optimism; you have given me a dose of optimism. But I’m absolutely shocked you talked about open source. Thanks so much, Rafik. And I did appreciate that you brought up standards, because I’m going to talk to Halak and we’re going to go a bit into collaboration and standards here. So obviously, with the myriad of AI governance frameworks, I’m going to turn to you on the question of where you see potential for alignment on standards, maybe some interoperability, maybe some risk management frameworks. So keep us on the hopeful path, please.
I am here to provide the hopeful perspective. Let me start out by saying that I lead global public policy at Cohere. Cohere is a Canadian AI developer; we build models and we have agentic AI, and our solution is called North. So in my role I look across the global regulatory framework. That means if our startup wants to, you know, do business in a certain country, I try to understand the regulatory landscape of that country, and then I advise our company if it’s favorable or not. When we’re talking about governance and frameworks that are existing, my perspective is that it’s not there yet, but I have a more promising view of it. I think that on certain principles we are converging to where we need to go, and there are strong opportunities.
Technical standards is one of them. You know, there are frameworks like the NIST and ISO frameworks. For startups, these are key. The reason they’re key is because they’re flexible and they’re evolving. If we just go country by country, what that’s going to do is price out smaller companies. But if we have an international framework that is evolving and flexible, and, you know, one that also includes industry coalitions, which a lot of the model developers are a part of, but also, like, other stakeholders can be a part of as well, I think it really helps. The second thing I would say is around shared practices around risk mitigation. So I think there’s strong opportunity there as we come together and share documents or, you know, evaluations around misuse or model capabilities or impact of models.
Like I said, we have a way to go, but we are moving closer. The third thing I would say is interoperability of shared resources. This is key. We have a big ecosystem: yes, big tech is involved, but there are smaller players, and every single day there are new startups wanting to emerge with a go-to-market strategy. The only way this is possible is if all of industry, big and small, the whole ecosystem, starts sharing documentation around red teaming, evals, multilingual benchmarks, and things like that, to come to some sort of consensus.
Thanks so much. I’m really enjoying this positive vibe we’re going with. That combination links really nicely back to what Bella was saying around coalitions: build on themes, where do we have common ground and what can we build on. So I really enjoyed that contribution. Rajesh, can I turn to you next? Because I did wonder what all this means for smaller and developing economies. Do you have any examples of shared standards, pooled resources, any of the things Halak was talking about, public-private models, or anything you’ve seen that looks promising, that looks like it could deliver?
Thank you.
You know, as we said, the moment you look at shared models, there are multiple reasons why we want to do this. One, of course, as we’ve talked about, is the cost involved: that itself is becoming prohibitive, and hence many countries may have no option but to adopt this shared model. We also see it in the regional compute consortiums that countries can create, and you often see examples where, say, a standard data set is shared, not just within a country but between government, academia, and industry, making sure they’re all able to leverage the same data sets.
Compute clearly continues to be a shared resource in many places. Even in India, for example, our own AI Mission has created a cluster that can be broadly leveraged by industry, academia, and government, ensuring they can get access to the right set of GPUs, the GPU farms, and are able to use that and take it forward. So: public-private sharing of data, certainly the compute consortium, and then cloud credits. Sovereigns have been able to work with the hyperscalers on cloud credits for GPUs, because even if it’s not about building a frontier model, you need compute to leverage a frontier model, build some reasoning models on top of it, and build an application which is meaningful. Not every task needs a powerful GPU, but there are occasions where you definitely do, and then using some of those cloud credits becomes a big need. And then, when you switch to regulations: how do you make sure that even having a policy is something which is shared? You don’t want to reinvent the wheel every single time. Do you have a method by which you can look at what is out there in the world, leverage it, and reuse it? Because what you don’t want is a hundred versions of the same thing with a few nuances here and there. That’s something which I think companies will try to create a model for as well.
Thank you so much, and I’m going to kind of turn over to…
Yes. Yes. Looking forward to a truth, transparency, and accountability-driven world. It takes 30 years for the Epstein files to come out in a place like America, the developed world. Is that the speed of the system till it collapses and till we start a new world? Are we resigned to that fate?
Yeah, so I can’t really see the link between the Epstein files and… [inaudible exchange about how long it takes for the truth to come out]. Sure. Thank you. So, just to build on what Rajesh was saying there about capability: maybe we move into a bit of cross-border cooperation. Bella, if I can turn to you to build on those points. Because what we are seeing across the developing world in particular is that institutional capacity is often a bit of an issue; even with all the engagement and all the investments, you still run into it.
I saw you were taking notes furiously, so I’m sure you have reflections on what has been said so far. But also, what are some of the resources…
…dependencies: figure out what they want to invest in and what dependencies they’re willing to accept; build strong institutions that can mainline AI directly into public service delivery and, as you said, enable cross-border cooperation; and take a step back to figure out which foreign capabilities or foreign services they’re willing to accept at some levels of the stack, and where they’d like to invest in indigenous solutions. I mentioned open source earlier because this has come up time and time again, and I’m sure it will be absolutely no surprise to our audience here today. An example which has really stuck with me, and Rafi, I’d be really interested in your thoughts on this, is the Southeast Asian Languages in One Network model, the multilingual SEA-LION LLM.
And this is something we’ve called for, in a really interesting collaboration with AI Safety Asia: open models with local adaptation, really balancing inputs from open-source models, potentially provided by foreign providers, with adaptation to a local context. So leaving the summit, what I’m really going to be interested in is this connection between drawing on inputs from the open-source community, fine-tuning and locally adapting their contributions, and then perhaps doing so not only in the service of strong, robust, AI-ready institutions at the national level, but also at this collective, cross-border level. I hope that makes sense.
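The "shared base model, local adaptation" idea can be sketched in miniature. Below is a toy Python illustration (all weights and data are synthetic stand-ins, not any real model such as SEA-LION; a real LLM fine-tune would use an actual training framework): a "base model" trained elsewhere is adjusted with a few gradient steps on local data it has never seen.

```python
import numpy as np

rng = np.random.default_rng(1)

# An openly released "base model": weights trained elsewhere, on foreign data.
base_w = np.array([1.0, 1.0, 0.0])

# Local data reflecting a regional pattern the base model never saw.
local_w = np.array([1.0, 1.0, 2.0])  # local ground truth differs in one dimension
X = rng.normal(size=(200, 3))
y = X @ local_w + rng.normal(scale=0.1, size=200)

def mse(w):
    """Mean squared error of weights w on the local data set."""
    return float(np.mean((X @ w - y) ** 2))

# Fine-tuning: start from the shared base and take a few gradient steps locally.
w = base_w.copy()
for _ in range(50):
    grad = X.T @ (X @ w - y) / len(y)  # least-squares gradient on local data
    w -= 0.1 * grad

print(f"base model error: {mse(base_w):.3f}, locally adapted: {mse(w):.3f}")
```

The point of the sketch is that the expensive shared artifact (the base weights) is reused as a starting point, and only the cheap local adaptation has to happen in-country.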
It does, and I’m going to let Rafi feed into that as well, because you’ve segued really nicely into his part. Rafi, feel free to react to what Bella has said, but if you can also touch upon what you’ve seen as best practice in international and cross-border collaboration, maybe in healthcare, climate resilience, or education, anywhere you’ve seen good stories to tell, please do share.
I do think a lot about local fine-tuning, and I think that’s actually a really powerful concept: we can all contribute to a core and then locally fine-tune for our values and our needs. This has shown up in a bunch of different ways, and I’m personally interested in all these other architectures that make it possible, because in some ways we’re being fed a regime that says it’s not possible, but architecturally it actually is, in a bunch of different ways. I love the indigenous data model: looking at what different indigenous peoples have done around data collectives for their local areas. There’s a group of people in Hawaii, for example, doing this for their genomic data, because genomic data is really useful for pharmaceutical models, and they’ve been looking for ways to both monetize their data and trace its provenance as it goes through these pharmaceutical models.
So there are some professors out of UCSD starting to build what these data trusts could look like for Hawaiian people, and I think that model could be replicated in lots of different parts of the world. Mozilla is actually attempting a bunch of this. We’re creating something we call the Mozilla Data Collective. What the Collective is meant to be is a marketplace of ethically sourced but provenance-traced data sets: you can bring your data, it will help you scrub it, clean it, et cetera, and also make sure you have the appropriate licenses on it, so that people can come find the data sets they want to train their models on, while making sure attribution is given, compensation is given, et cetera.
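The ingredients of such a listing — steward, licence, provenance chain, attribution and compensation terms — can be captured in a small data structure. This is a purely illustrative Python sketch, not the Mozilla Data Collective’s actual schema; every name in it is hypothetical.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class DatasetListing:
    """One entry in a hypothetical provenance-traced data marketplace."""
    name: str
    steward: str                  # who controls the data, e.g. a radio station
    license_id: str               # the licence a buyer must accept before use
    provenance: tuple             # chain of custody from origin to publication
    requires_attribution: bool = True
    compensation: str = "per-use royalty"

def may_train_on(listing: DatasetListing, accepted_license: str) -> bool:
    """A model trainer can use the data only under the steward's licence."""
    return accepted_license == listing.license_id

listing = DatasetListing(
    name="community-radio-transcripts-2025",
    steward="Example Community Radio",
    license_id="CC-BY-NC-4.0",
    provenance=("studio recording", "transcription", "PII scrubbing"),
)

print(may_train_on(listing, "CC-BY-NC-4.0"))   # licence accepted -> usable
print(may_train_on(listing, "none"))           # scraping without a licence -> not
```

The design point is that the licence check, not goodwill, is what turns scraping into a compensated transaction.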
So we’re literally in conversations with almost every radio station on the planet to try to get their recordings and their transcripts onto the marketplace, not for Mozilla to make money. In fact, we want the radio stations to have a monetization path for all the data they’re sitting on, rather than simply having it scraped by big model providers trying to soak it into their systems. Instead, require that it be licensed, require that compensation be given. So I think there are models there. And on the computational side, there are also a lot of interesting things showing up around federated learning. For those of you who don’t know what federated learning is: Google did this very famously when they trained their handwriting model across everyone’s Android phones.
Your handwriting is very personal and private, and it stays on your device. Google was able to train a handwriting recognition model without getting access to your data, because part of the training happened on your phone, and then the model weights were shipped back up for centralized training. I think something like that could be an interesting model for international collaboration: I can bring my data to the game, my healthcare data, my values data, my language data, but not have to release it to a different company or, sorry, a different country. Instead, do part of the training on my compute, on my infrastructure, only ship model weights back up, and then actually create bigger models across borders and across geographies that could take into account different healthcare scenarios, different value systems, et cetera. So I think there are these interesting alternative architectures we can start leaning into, these data trust models, these federated learning models, that could be massive enablers for cooperation and allow us to build foundational things that we can then fine-tune and bring to our local context.
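The pattern Rafi describes — local training on private data, with only model weights leaving each participant — is essentially federated averaging (FedAvg). Below is a minimal toy version in Python; the three "countries" and their data are synthetic, and a production system would add secure aggregation, weighting by data size, and privacy protections.

```python
import numpy as np

rng = np.random.default_rng(0)

def local_update(w, X, y, lr=0.1, steps=20):
    """One participant's private training: gradient descent on local data only."""
    w = w.copy()
    for _ in range(steps):
        w -= lr * X.T @ (X @ w - y) / len(y)   # least-squares gradient step
    return w

# Three "countries", each holding private samples of the same underlying signal.
true_w = np.array([2.0, -1.0])
clients = []
for _ in range(3):
    X = rng.normal(size=(50, 2))
    clients.append((X, X @ true_w + rng.normal(scale=0.1, size=50)))

# Federated rounds: raw data never leaves a client, only weights move.
w_global = np.zeros(2)
for _ in range(10):
    local_ws = [local_update(w_global, X, y) for X, y in clients]
    w_global = np.mean(local_ws, axis=0)       # the "server" averages the weights

print(f"recovered weights: {np.round(w_global, 2)}")
```

The averaged model recovers the shared signal even though no single data set ever crossed a border, which is exactly the property that makes the architecture attractive for cross-country collaboration.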
Thanks so much, Rafi. Fine-tuning is definitely a theme in this conversation: how you build for different cultures and countries. And Halak, maybe I can come to you next, because we keep talking about international cooperation and coordination, but I’m wondering how you translate that chit-chat into actual skills, capacity, and capability for emerging economies. We are at a very international AI Impact Summit, so how do we get from talking about governance to all this international policy actually delivering for emerging economies?
It’s a good question. Let me start by saying capacity building isn’t just running workshops or telling regulators what should be done. Capacity building is critical for emerging economies especially, because they have unequal access to data, information, and technology. So what are we trying to solve for here? The first thing I would say is shared evidence: we need players to feed into this capacity-building system with documents, results, and performance benchmarks, to lift up other players. That would be number one.
The second thing, which I think is key and sometimes overlooked, is the value of procurement policies. And I agree with Isabella: what if we had an industry coalition, a cross-border network, solving for procurement policies or procurement rules? What this does is bring in global players, so you’re opening up your country to different markets. The next thing I would say: there are developers, who develop the technology, and then there are deployers, who buy the technology and use it, for example a public sector agency. Why is it so—
…economist Frank Nagle has a recent report that approximately 24 billion US dollars are being wasted by not switching to open-source models right now. So the economics are starting to make a lot of sense. Once all these stars align, it becomes almost obvious what an answer could look like for local governments around open-source AI models, et cetera. So I’m really excited for that in the next 12 to 18 months.
Thank you. Rajesh?
No, I agree with what’s been said so far, but I also want to add this: when you look at AI governance, people tend to lead with regulation first. I believe that countries, and especially the countries we talked about from an inclusion point of view, have got to lead with an innovation-first mindset. Regulation is required and certainly needed, but innovation is probably needed more, in some sense. Also, while there could be horizontal governance applying to every AI system, I think the more meaningful governance comes when you get into sectoral governance: the understanding of a harm in healthcare is very different from financial services, and so on. Get into those sectoral areas and you can have a meaningful governance structure. And last but not least, you need the right talent: people in the public sector, the people who are supposed to be governing all of this, who actually understand it. This is not the talent for building AI models and AI systems; it’s the talent in the governance space. If the people in governments who are regulating this don’t understand the real harms, it’s going to be a bigger issue. And especially for the list of countries we talked about: the deeper down the list you go, the more you will find a talent gap in terms of understanding.
Thank you. And, you know, as someone who lives in Brussels, I’ll make sure to take that message back. Halak.
Okay, so what am I most excited about in the next 12 months? In the last few days you’ve seen companies really, really excited about AI, but you’ve also seen countries very excited about AI. What does this mean for governance? It means that the community and the participation are only going to increase; I don’t see it going backwards. As the technology evolves, more players are going to have a voice in the system and in the standards bodies, the ITU bodies or the ISO bodies. And because of this convergence, we as a society are going to increase our literacy, not only in AI but in technology generally, and bring it into whatever we’re in, whether the private sector or the public sector.
And because of that, I think a lot of progress will be made in the next 12 months, and you’ll see it as it converges.
Thank you so much. Thanks to all the panel. Thanks for being here, and enjoy the rest of your day. Thank you.
Event“Sabina Chofu is the International Policy and Strategy Lead at TechUK, and TechUK is the sister association of NASCOM in the UK.”
The knowledge base lists Sabina Chofu as International Policy and Strategy Lead at TechUK and notes that TechUK is the sister association of NASCOM in the UK, confirming the report’s statement.
“India is creating public‑private compute consortia, shared GPU clusters such as the AI Mission compute cluster, and cloud‑credit schemes from hyperscalers to provide AI resources without each country having to build a frontier model from scratch.”
A recent Indian white‑paper described a national push to democratise AI infrastructure, treating compute, datasets and models as digital public goods and encouraging shared resources, which adds context to the reported compute‑consortium initiatives.
“The emerging “AI divide” is larger than the earlier digital divide because it concerns both agency and access.”
Discussion in the knowledge base about policy levers to bridge the AI divide highlights that the divide now encompasses issues of agency and access beyond the traditional digital‑access gap, providing additional nuance to the claim.
The panel shows a clear convergence on three pillars: (1) coalition building and issue‑specific alignment as the pragmatic route for AI governance; (2) the adoption of open, interoperable standards and open‑source infrastructure to preserve sovereignty while enabling collaboration; (3) capacity building through shared evidence, benchmarks and talent development, complemented by public‑private resource‑sharing mechanisms. While there is agreement that a universal global consensus is unattainable, participants differ on the balance between innovation‑first approaches and regulatory frameworks.
Moderate to high consensus on practical cooperation mechanisms (coalitions, open standards, capacity building) but low consensus on the feasibility of a single global governance regime, implying that future policy work should focus on building issue‑specific coalitions, open‑source ecosystems, and shared capacity‑building initiatives.
The panel largely concurs that a universal AI governance consensus is unattainable and that coalition‑building is essential. However, substantive disagreements emerge around the sequencing of innovation versus regulatory frameworks, the preferred technical mechanism for shared AI infrastructure (open‑source versus formal standards), and the primary barrier to AI adoption (compute access versus coalition‑driven governance). An unexpected tension appears between audience expectations for rapid transparency and the panel’s limited engagement with that demand.
Moderate to high: while there is consensus on the need for cooperation, the differing views on how to operationalize capacity building, infrastructure sharing, and regulatory sequencing could impede coordinated action, especially for emerging economies seeking concrete pathways.
The discussion pivoted from an initial, high‑level framing of AI governance to a grounded, solution‑oriented dialogue thanks to a handful of incisive remarks. Bella’s realistic appraisal of global consensus set a pragmatic baseline, while Rajesh’s exposition of the multi‑layered AI divide supplied the concrete challenges that needed addressing. Rafik’s open‑source analogies and data‑collaborative proposal, together with Halak’s focus on evolving technical standards and systemic capacity‑building, supplied actionable pathways for coalition‑building and shared infrastructure. Subsequent comments on innovation‑first, sectoral governance, and talent development deepened the conversation, steering it toward implementable policies for emerging economies. Collectively, these key comments reshaped the tone from speculative to constructive, aligning the panel around tangible mechanisms—open standards, data trusts, federated learning, and procurement coalitions—to bridge the AI divide.
Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.