Smart Regulation: Rightsizing Governance for the AI Revolution

20 Feb 2026 17:00h - 18:00h


Session at a glance: summary, keypoints, and speakers overview

Summary

The panel opened by framing the summit’s focus on “governance for an AI-driven world” and the need to give all nations access to AI resources through shared compute and data initiatives [1-5][10-14][15].


Bella Wilkinson argued that a universal AI-governance consensus is unattainable in the current geopolitical climate, but partial alignment on priority issues can be achieved by building coalitions that emphasize sovereignty and strategic autonomy, especially for resource-constrained countries that might pool compute resources [26-28][38-44].


Rajesh Nambia highlighted that the emerging “AI divide” will far exceed the previous digital divide, pointing to limited access to high-performance compute, high costs, fragmented and low-quality data, and broader infrastructure gaps such as power and connectivity [56-60][61-66]; he suggested public-private compute consortia, shared GPU clusters and cloud-credit schemes as practical ways for developing economies to participate [132-133].


Rafik Rikorian proposed an open-source model as a template for AI collaboration, likening the universal Linux code base and the LAMP stack to a shared infrastructure that can be locally fine-tuned while preserving digital sovereignty; he called for open standards and interfaces to prevent a handful of frontier-model firms from monopolising AI governance [68-78][84-89][90-96].


Halak Shirastava reinforced the promise of technical standards (e.g., NIST, ISO) and shared risk-mitigation practices, stressing the importance of shared evidence, coordinated procurement policies and interoperability of resources to build capacity in emerging economies; she expressed optimism that increased stakeholder participation will drive measurable progress within the next year [102-108][110-115][188-196][218-224].


Overall, the discussion converged on the view that while global AI-governance consensus is unlikely, targeted coalitions, open-source-inspired frameworks, and shared standards can enable meaningful cooperation and capacity-building for smaller and developing nations.


Keypoints


Major discussion points


Global AI governance is unlikely to achieve full consensus, but targeted coalitions and partial alignment are feasible.


Bella notes that “global consensus on how to govern AI is a no-go” in the current geopolitical climate, yet “partial alignment on priority issue areas is possible” and can be built through smaller coalitions that later scale via multilateral formats [26-29][36-40][42-44].


Developing and smaller economies face a multi-layered “AI divide” that goes beyond the traditional digital gap.


Rajesh highlights three core barriers: limited and expensive compute resources; fragmented, low-quality data silos; and foundational infrastructure deficits such as power and connectivity, all of which compound talent shortages [57-63][68-71][73-76].


Open-source models and shared software infrastructure can provide a pathway to digital sovereignty and collaborative AI development.


Rafik draws an analogy to the Linux/LAMP stack, arguing that a common open-source core with locally-fine-tuned layers would let every nation retain sovereignty while contributing to a shared ecosystem [68-78][80-88][90-96].


Technical standards, shared risk-mitigation practices, and interoperability are key levers for scaling governance and enabling smaller players.


Halak points to evolving frameworks such as NIST and ISO, the need for shared evaluation documents, and the importance of interoperable resources (e.g., red-team reports, multilingual benchmarks) to avoid “price-out” effects for startups [102-108][110-115][118-124].


Capacity-building must go beyond workshops to include shared evidence, procurement policy coalitions, and sector-specific governance mechanisms.


Both Halak and Rajesh stress that emerging economies need concrete tools: shared performance benchmarks, cross-border procurement networks, and sector-focused regulatory approaches (e.g., health-care vs. finance) to develop the talent and policies required for responsible AI [184-191][192-199][213-215][219-224].


Overall purpose / goal of the discussion


The panel was convened to explore how the international community can “up-level the playing field” for smaller and developing nations by sharing compute, data, and governance resources, and by identifying practical mechanisms (coalitions, open-source models, standards, and capacity-building) that can foster equitable AI development across sectors such as health, education, and climate resilience [2-5][18-21].


Overall tone and its evolution


– The conversation opens with a pragmatic, somewhat pessimistic tone about the feasibility of worldwide AI governance consensus [26-28].


– It quickly shifts to constructive optimism, emphasizing coalition-building, open-source collaboration, and concrete standards as achievable pathways [40-44][68-78][102-108].


– By the latter half, the tone becomes forward-looking and hopeful, with speakers highlighting imminent progress in standards, capacity-building, and sector-specific governance over the next 12-18 months [211-224][218-224].


Thus, the discussion moves from acknowledging geopolitical constraints to outlining actionable, collaborative solutions that inspire confidence in the near-term future.


Speakers

Sabina Chofu


Areas of expertise: International AI policy, governance, multilateral cooperation


Role/Title: International Policy and Strategy Lead at TechUK (sister association of NASCOM in the UK)


Affiliation: TechUK


Bella Wilkinson


Areas of expertise: Digital society, AI governance, coalition building


Role/Title: Research Fellow, Digital Society Program


Affiliation: Chatham House


Rafik Rikorian


Areas of expertise: Open-source technology, shared AI infrastructure, standards


Role/Title: Chief Technology Officer


Affiliation: Mozilla


Rajesh Nambia


Areas of expertise: AI adoption in emerging economies, compute & data infrastructure, public-private partnerships


Role/Title: President


Affiliation: NASCOM (National Association of Software and Service Companies, India) [S1]


Halak Shirastava


Areas of expertise: Global AI policy, technical standards, interoperability, capacity building


Role/Title: Global Public Policy Lead (AI)


Affiliation: Cohere (Canadian AI developer) [S2]


Audience


Areas of expertise:


Role/Title: Audience member(s)


Affiliation:


Additional speakers:


Navreena Singh – Mentioned as absent; affiliated with Credo AI.


Full session report: comprehensive analysis and detailed insights

The session opened with Sabina Chofu, International Policy and Strategy Lead at TechUK, who noted that Navreena Singh could not attend because of a meeting with the president and positioned the summit under the theme “governance for an AI-driven world” [2-4][9-11]. She also reminded the audience that TechUK is the sister association of NASCOM in the UK [15-17].


Bella Wilkinson, research fellow on the Digital Society Programme at Chatham House, set a realistic tone by stating that a universal AI-governance consensus is currently a “no-go” in the geopolitical climate [26-28]. She argued that, while full alignment is unattainable, partial alignment on priority issues can be achieved through issue-specific coalitions that may later scale via multilateral formats [12-15]. Wilkinson highlighted the accelerating US-China AI race, the opacity of frontier models, and the erosion of trust in international institutions, and suggested that coalition-building should be framed around “sovereignty and strategic autonomy” for resource-constrained countries [34-37][38-41].


Rajesh Nambia, President of NASCOM India, described the emerging “AI divide” as larger than the earlier digital divide because it concerns both agency and access [56-60]. He identified three inter-linked barriers for emerging economies: (1) severe scarcity and high cost of high-performance compute, even after adjusting for purchasing-power parity [57-60]; (2) fragmented, low-quality data silos across government departments that impede the creation of representative models [61-66]; and (3) foundational infrastructure gaps (including unreliable power, limited clean energy, and insufficient connectivity) that further hinder AI deployment [68-71][73-76]. Nambia cited public-private compute consortia, shared GPU clusters such as India’s AI Mission compute cluster, and cloud-credit schemes from hyperscalers as ways to provide resources without each country having to build a frontier model from scratch [130-133]. He also warned that talent gaps in both AI development and regulatory expertise threaten effective governance [213-215].


Rafik Rikorian, Chief Technology Officer of Mozilla, drew a parallel with the Linux ecosystem, noting that “every computer on the planet runs Linux” and that this model allows anyone to contribute to a common code base while retaining the freedom to fine-tune their own implementations [70-78]. He extended the analogy to the early web, illustrating how the shift to the LAMP stack introduced openness that allowed anyone to build services without needing permission [80-86][87-96]. Applying this to AI, Rikorian described Mozilla’s “Data Collaborative”, a marketplace for ethically sourced, provenance-tracked datasets that compensates data owners (e.g., radio stations) and supplies clean data for model training [157-166]. He also referenced an indigenous data-trust model for Hawaiian genomic data and advocated federated-learning architectures, where model training occurs on local devices and only model weights are shared, preserving data sovereignty while enabling cross-border collaboration on health, language, or other sector-specific models [167-176].
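The federated-learning pattern Rikorian describes, where training happens where the data lives and only model weights cross borders, can be illustrated with a minimal sketch. This is a toy simulation of federated averaging under assumed synthetic data and hyperparameters, not a description of any Mozilla system; every name and number in it is hypothetical.

```python
# Illustrative sketch of federated averaging (FedAvg), the pattern described
# above: raw data never leaves each participant; only model weights are shared.
# All data and hyperparameters here are hypothetical, chosen for illustration.
import numpy as np

rng = np.random.default_rng(0)

def local_train(weights, X, y, lr=0.1, epochs=50):
    """One participant trains on its own private data, returns updated weights."""
    w = weights.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)  # gradient of mean squared error
        w -= lr * grad
    return w

# Three "participants" each hold private data drawn from the same true model.
true_w = np.array([2.0, -1.0])
clients = []
for _ in range(3):
    X = rng.normal(size=(40, 2))
    y = X @ true_w + rng.normal(scale=0.01, size=40)
    clients.append((X, y))

# Federated rounds: broadcast global weights, train locally, average updates.
global_w = np.zeros(2)
for _ in range(10):
    local_updates = [local_train(global_w, X, y) for X, y in clients]
    global_w = np.mean(local_updates, axis=0)  # only weights cross the wire

print(global_w)  # converges toward the shared underlying model
```

In production settings, frameworks such as Flower or TensorFlow Federated handle the orchestration, secure aggregation, and privacy accounting that this sketch deliberately omits.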


Halak Shirastava, Global AI and Public Policy Lead at Cohere, emphasized the role of evolving technical standards such as NIST and ISO, describing them as “flexible and evolving” frameworks that can avoid “price-out” effects for startups [102-108]. She highlighted shared risk-mitigation practices (joint misuse evaluations, red-team reports, and interoperable multilingual benchmarks) as essential for scaling governance across large tech firms and smaller players [110-115][118-124]. Shirastava then outlined a three-step capacity-building framework: (a) sharing documented evidence and performance benchmarks; (b) establishing coordinated procurement-policy networks to avoid costly country-by-country compliance; and (c) promoting open-source adoption to prevent billions of dollars of waste on proprietary solutions [183-191][188-196].


An audience member raised a comment about the “30 years for FC files” and noted a lingering concern about the speed of systemic reforms; Sabina responded with a confused acknowledgement that the point had not been directly addressed [135-138][140-144].


Returning to coalition-building, Bella highlighted the “Southeast Asian Languages Under One Network”, a multilingual LLM that combines open-source model inputs with local fine-tuning, illustrating how open-source assets can be adapted to regional contexts while supporting robust national institutions and cross-border cooperation [151-155]. Rikorian expanded on this by reiterating the potential of the Mozilla Data Collaborative and federated-learning architectures, and Shirastava reinforced the importance of the three-step capacity-building framework. Rajesh concluded by urging an “innovation-first” mindset, recommending pilot projects and sector-specific governance (e.g., health-care versus finance) before imposing heavy regulation [213-215].


In closing, Sabina summarized the panel’s consensus: (i) targeted, issue-specific coalitions are the most pragmatic route to partial governance alignment; (ii) open-source-inspired infrastructures and open standards can provide shared foundations while preserving national sovereignty; (iii) technical standards (NIST, ISO) and shared risk-mitigation practices are vital for inclusive participation; and (iv) capacity-building must move beyond ad-hoc workshops to systematic sharing of evidence, benchmarks, and procurement frameworks [26-29][36-40][68-78][102-108][184-191]. Shirastava projected that increased stakeholder participation over the next year will accelerate standards development, raise AI literacy across public and private sectors, and deliver concrete capacity-building outcomes [218-224]. Rikorian echoed this optimism, noting that federated-learning and data-trust models are already maturing and could be deployed at scale within the coming months [176].


Notable disagreements were recorded. Nambia emphasized compute access as the primary barrier and advocated an innovation-first approach, whereas Wilkinson placed greater weight on coalition-driven governance mechanisms rather than direct compute provision [57-60][26-29][38-44]. Rikorian’s vision of an open-source stack contrasted with Shirastava’s focus on formal standards bodies, reflecting a tension between community-driven and standards-driven pathways [70-78][102-108]. Finally, Nambia’s “innovation-first” stance conflicted with Shirastava’s claim that early adoption of flexible standards and coordinated procurement policies is essential to avoid costly regulatory fragmentation [213-215][102-108].


Overall, the panel agreed that while a single global AI-governance regime is unlikely, the combination of targeted coalitions, open-source-style shared infrastructures, evolving technical standards, and robust capacity-building programmes offers a viable roadmap for narrowing the AI divide and empowering smaller and developing nations to participate meaningfully in an AI-driven future.


Session transcript: complete transcript of the session
Sabina Chofu

about this morning is rightsizing governance for an AI-driven world. So what we’ll try to do with a pretty excellent panel, as I’m sure you’ll agree, is talk a bit about shared compute and data initiatives that hopefully give all nations access to AI resources. We’ll look a bit at how to up-level the playing field for smaller and developing nations. And we’ll talk about collaboration in key sectors like healthcare and education and climate resilience. I’ve got a perfect panel to do that with. I’m going to introduce them all first, and then we’ll dive straight into the conversation. So unfortunately, Navreena Singh from Credo AI couldn’t be with us this morning. She’s got a meeting with the president, so she’s excused.

But we do have… Let me start with, just next to me here, Bella Wilkinson, who’s a research fellow on the Digital Society Program with Chatham House. Next to her is Rafik Rikorian, I hope I’ve pronounced that vaguely okay, who is the Chief Technology Officer for Mozilla. Next to him, we’ve got Rajesh Nambia, who is the President of NASCOM, our sister association here in India. And last but not least, we’ve got Halak Shirastava, who leads Global AI and Public Policy and Regulatory Affairs at Cohere. And for those of you who don’t know me, I’m Sabina Chofu, International Policy and Strategy Lead at TechUK. So we are the sister association of NASCOM back in the UK.

So without further ado, we will start with setting a bit of a global context, and who better to do that than Isabella. So from a kind of geopolitical perspective, how realistic, I guess, is alignment on AI governance across countries with… fair to say very different strategic interests right now. And where do you see maybe multilateral institutions? I know multilateralism is not a very popular theme these days, but where do you see multilateral institutions or maybe other international players playing a role in this space? So over to you.

Bella Wilkinson

Thank you, Sabina. Thanks to my fellow speakers. It’s great to be here today, really keeping the energy up on the final day of the summit. We can all do it. Let me answer your question directly and then perhaps elaborate a little bit more in detail. Global consensus on how to govern AI is a no-go. It is not going to happen in this geopolitical environment. However, partial alignment on priority issue areas is possible, and it’s pragmatic to throw our weight behind these smaller gatherings that we can then scale using the multilateral format. Now, let’s take a second. Let’s take a second to sketch out the state of play. We have some great experts in the room, on the panel, so I won’t spend too long doing this.

We have been absolutely covered in really optimistic summit rhetoric, walking into Bharat Mandapam, going to side events over the course of this week. But despite the optimism, outside of these walls, in the background, the US-China AI race continues to accelerate to the umpteenth degree. The capabilities of the most advanced and frontier AI systems and models, the little we know about their capabilities, mind, with huge gaps in transparency, continue to advance. And global scientists only recently have issued warnings about the state of the science and the intense uncertainty surrounding these capabilities and the impact they might be having on our communities and societies. Well, it’s a good thing we have strong international institutions and shared values. We don’t. You know, it’s a really difficult time for global cooperation outside of AI. We’re seeing, I would argue, an unprecedented degradation since the Second World War of the international organizations, the shared values, the rule of law that we have all held so dearly. So suffice to say, it’s a difficult time for global governance, and it’s a difficult time for the global governance of AI. Now, institutions in the past have very much been brokers, mediators, and scalers of consensus on tricky governance issues, and some of the governance problems we’re facing today are pretty old, right? I mean, I’ve encountered them in previous roles at Chatham House and other areas of tech, and I’m sure the experts on our panel have come across them. And the core governance puzzle that we need to figure out is this: taking into account the state of geopolitics, the uncertainty around the state of the science, the market dynamics mediated by these leading labs, and the intensely, intensely competitive US and Chinese AI race dynamics, how on earth do we bring rivals and competitors around the same table?

How do we bring states with a nominal or a minimal alignment of interests and incentives into the same room? Now, you started by asking me about multilateralism and institutions, but maybe let’s reframe this and talk about coalitions. In other areas of governance, what we’ve seen is intense coalition building in crisis or unstable settings around a trusted mechanism, a trusted approach, perhaps in the absence of shared values and principles. And what I’m really interested in, in the context of AI, is where coalition building can develop trust around a credible governance approach, adopt a state champion, get support from associations, from builders, from leading labs themselves, and then scale it using the multilateral format. And over the past few days, I’ve been really excited by some of this splintering to scale dynamics that I’ve seen maybe in conversations on verification, on-chip hardware, risk mitigation strategies, even anonymized collection of usage data, which came out of the commitments yesterday.

Now, what’s the messaging that can drive this coalition building in the absence of trusted institutions, in the absence of shared values? I’ll get into this later in my remarks, but I think it has to be sovereignty and strategic autonomy. Resource-constrained countries, who might decide to adopt a common data governance approach, who might decide to pool resources like compute, have to also consider a degree of governance alignment, again, at this low-hanging fruit, in order to not only withstand the dynamics of the AI race, but to ensure that the collective benefits of cooperation and governance alignment massively outweigh anything they could do individually. So I think I’ll leave it there. Slightly pessimistic take. Let’s see if there’s some more optimism on the

Sabina Chofu

Thank you so much, Bella. I don’t think it was that pessimistic. You did kind of, I think, make it sound very pragmatic in terms of, look, the world is not what we want it to be, and there isn’t the level of multilateral cooperation that we maybe used to have. But you have talked about coalition building, and it’s probably the best we can hope for in the world as it is, as opposed to the world as we’d like it right now. And Rajesh, can I turn to you next? For emerging economies, obviously access to compute, data and infrastructure are critical, but what do you see as some of the most pressing barriers, but also maybe opportunities, for AI adoption in India and beyond?

Over to you.

Rajesh Nambia

First of all, thank you for having me on the panel. Pleased to be with all of you, and a few of you showed up here as well, so thank you for coming. We wish this was the Modi inauguration last evening, which had a little bit more than this crowd, but nevertheless, we’ll do with this. But you know, we used to talk about the digital divide for a long period of time, and while that had its own puts and takes when you compare a smaller economy and country with a larger one, I think the AI divide is going to be much, much bigger than the digital divide we saw. The biggest difference is that the digital divide was at least about access, whereas this is all about agency, and that can completely put you on the back foot. So it is such an important topic when you talk about the broader haves and have-nots and what really goes on between the larger and smaller economies. And I truly believe that accessibility, when you look at it at the broader scale, comes down to multiple things, starting with compute, one of the largest pieces of what we are talking about here. As you mentioned, there is the race between the US and China, but if you leave out those two countries, then of course there is a big drop in terms of where the real access is going to be. And I believe totally that continued limited access to broader compute facilities is going to put some of these smaller countries, especially the developing ones, at a bit of a disadvantage.

So, I think there’s a lot that can be done around it in terms of saying, you know, what is that, you know, countries can potentially do in terms of pooling and so on. But I think there is certainly an issue when it comes to compute. And, you know, not just in terms of accessibility, but also in terms of expense and so on, because at the end of the day, all of these are, even if you use the purchasing power parity, and then sort of look at what it costs for people to sort of get into the kind of level of GPUs, potentially, or GPU clusters one has to produce to even have a meaningful language model and so on.

I think that’s going to be a very different ballgame. The second element of this broader issue is the data: the organization of data, availability of data, quality of data, and so on. The more you get into the developing world, you will find that the data itself is very siloed in many ways. There are different state silos, different department silos, and so on, and it gets to a point where the data, which is such an important and integral part of everything to do with AI, the data which gets fed into the broader models and eventually the AI systems, will not necessarily have the right representation of that population, which is a huge concern. Of course, India is slightly luckier in many ways, punching a little bit above our weight in some sense, but when you go down the list of countries which do not have access to all of these, I think you’re going to find it even harder to solve the data issue; data availability, data quality, all of that becomes a bigger issue.

And when we talk about the infrastructure gap and the compute gap, it’s a little bit more than just the pure compute itself, GPUs and so on; it’s also about connectivity and power. These are issues we somehow take for granted in other segments, but power is going to be a huge foundation for all of this. As you know, there are multiple layers in building any AI system, and one of the bottom-most layers is power: what really happens to the power, and if it has to be clean power, does it put an additional tax on the developing world to make sure that power comes out clean? Connectivity is a huge issue too; even though it’s kind of broadly solved in some sense with all the satellite options, the kind of connectivity you need to run a truly inclusive AI system is going to be very different from what people have thought.

And then of course there is the skills issue: the availability of skills and ensuring that you have the right skills not just to leverage AI but also to build AI. There are two different types of capability you need to produce in any country. So these are the issues. The opportunity itself would be to look at this and say: are there other ways of collaborating, other ways of partnering? Because when you go down the list of countries, we have close to 200 countries in the world, and when you leave the top 5 or 10 and keep going down the list, it becomes harder. I don’t think everybody is going to be producing a full-blown large language model for themselves. At that point, the question will be: can you really partner, can you really leverage some of the common systems that can be shared across these countries?

Sabina Chofu

Thank you. I mean, you’ve done a brilliant job of putting all the three problems we’ve got and then saying you’ve got a long list afterwards in terms of cooperation. But I love the touch of optimism there at the end. It’s like, you know, if you lift a country out of the room, you still have a hundred and whatever, 85 that need to figure it out. So I liked a lot of that framing. And thanks for touching on

Rafik Rikorian

I mean, unsurprisingly, being someone from Mozilla, I’ll probably go with the open source angle as one of the opportunities to actually align the talent, align the capabilities, and actually do shared infrastructure. I mean, maybe I’ll draw two analogies to think about, and then we can go more deep into those as it applies to AI. But for all practical purposes, every computer on the planet runs Linux. There are a few iPhones here and there on top of it. But the Linux model, I think, is a good one for all of us to think about, that every computer… Every country, every nation in the world, almost every company in the world, contributes to the single code base which has been deployed across these billions of computing devices across the planet.

And there are lots of derivative work that happens from it. So like a company like Google can then take that and make it into Android. A company like a vending machine company can deploy Linux onto a Raspberry Pi and run inside their vending machine. So I think there’s an analogy here of being able to use shared infrastructure, shared software infrastructure as a collaboration mechanism that we can all pool resources together but still have sovereignty on top of it. So we can still all be contributing to this common core but then fine-tune our way to our own particular implementations. And I think that if we take that and then marry it with a web analogy of in the early 90s of the original web, you needed to ask for permission in order to deploy a website.

And by permission I mean effectively you had to go buy yourself a Solaris box, or you had to go buy yourself a Windows NT server, you’re trying to configure an ActiveX scenario. And the beauty of what Mozilla and Firefox did, we’re not the only ones who did it, but the beauty of what they did there is a forced openness throughout the stack that enabled anyone without permission to build whatever they wanted. And I think we need to find a similar moment. So in that world, we went from the Windows NT stack and all of IIS to the LAMP stack. And the LAMP stack has these gorgeous analogies of just like anyone can build on Linux.

When Facebook needed PHP to move faster, they did massive improvements on PHP, which then trickled down to all of us. So people can contribute in different ways across it. That’s not the world we’re currently living in with AI. We’re living in this world where there are a few frontier model companies that are effectively doing governance for all of us in some way, shape, or form. And I agree with my colleague that that’s an untenable situation. I do live in San Francisco, but you don’t want four people in San Francisco making governance decisions for the entire world; that doesn’t make a lot of sense. So I do think if we can find the LAMP stack equivalent model for AI, and this is actually what I’ve turned all of Mozilla towards: just how do we define open standards, how do we define open interfaces, so that the vibrancy of the open source community can come together and actually build solutions that work for every single person, every single community, every single government on the planet.

You can sort of build upon, you can contribute to the common base, but then build upon it and take it in a way that makes it more aligned with your country’s values or your company’s values or your individual values, and you can fine-tune your solution out of that. So I think there is an analogy here around how open source could actually provide digital sovereignty across all the different levels: give us agency as a person, give opportunities for flexibility at a corporation level, and then give countries the ability to own their version of the stack. That could actually be quite beautiful if we can actually figure out how to do that in an appropriate way.

Sabina Chofu

I tried to give you a dose of optimism; you have given me a dose of optimism. But I’m absolutely shocked you talked about open source. Thanks so much, Rafik. And I did appreciate you brought up the standards, because I’m going to talk to Halak and we’re going to go a bit into collaboration and standards here. So, obviously, with the myriad of AI governance frameworks, I’m going to turn to you on the question of where you see potential for alignment on standards, maybe some interoperability, maybe some risk management frameworks. So keep us on the hopeful path, please.

Halak Shirastava

I am here to provide the hopeful perspective. Let me start out by saying that I lead global public policy at Cohere. Cohere is a Canadian AI developer; we build models, and we have agentic AI; our solution is called North. So in my role, I look across the global regulatory framework. That means if our startup wants to, you know, do business in a certain country, I try to understand the regulatory landscape of that country, and then I advise our company if it’s favorable or not. When we’re talking about governance and frameworks that are existing, my perspective is I think it’s not there yet, but I have a more promising view of it. I think that on certain principles we are converging to where we need to go, and there are strong opportunities.

Technical standards are one of them. You know, there are frameworks like the NIST and ISO frameworks. For startups, these are key. The reason they’re key is because they’re flexible and they’re evolving. If we just go country by country, what that’s going to do is price out smaller companies. But if we have an international framework that is evolving and flexible, and that also includes industry coalitions, which a lot of the model developers are a part of but which other stakeholders can be a part of as well, I think it really helps. The second thing I would say is around shared practices for risk mitigation. So I think there’s strong opportunity there as we come together and share documents or, you know, evaluations around misuse or model capabilities or impact of models.

I think, you know, like I said, we have a way to go, but we are moving closer to that. And then the third thing I would say is interoperability of shared resources. This is key, key, key. We have a big ecosystem. So, yes, there is big tech involved, but there are smaller players. And every single day there’s new startups that are wanting to emerge and wanting to have a go-to-market strategy. And the only way this is possible is if industry and all of industry, big and small, the whole ecosystem starts sharing documents and documentation around, you know, red teaming or evals or multilingual benchmarks and things like that to come to some sort of consensus.

Sabina Chofu

Thanks so much. I’m really enjoying this positive vibe we’re going with. And, you know, that combination, I think it kind of links really nicely back to what Bella was saying around coalitions, you know, build on themes, right? It’s like where do we think we have common ground and what we think we can build on. So I really, really enjoyed that contribution. Rajesh, can I turn to you next? Because I did wonder what all this stuff means for, you know, kind of smaller and developing economies. And maybe if you have any examples of shared standards, pooled resources, any of the stuff that Halak was talking about, public-private models, or anything that you’ve seen that looks promising, that looks like it could deliver.

Thank you.

Rajesh Nambia

You know, as we said, the moment you look at shared models, there are multiple reasons why we want to do this. And one, of course, as we’ve talked about, is the cost involved in doing some of that. I think that itself is becoming cost-prohibitive, and hence there may not even be an option for many of the countries but to sort of have this shared model. We also find that in the regional compute consortiums that, you know, folks can potentially create, you often see examples of, for instance, a standard data set and stuff like that being shared, not just within a country. It could be between government, academia, and then industry sort of sharing the same sorts of data sets, making sure that they’re able to leverage that in some sense.

Compute is clearly something which continues to be the shared resource in many of these cases. Even in India, for example, you know, our own AI Mission has created this cluster which can be broadly leveraged by industry, academia, and the government, in terms of ensuring that they’re able to get access to the right set of GPUs and then take it forward. So, public-private sharing of data, certainly the compute consortium, and then cloud credits. I think that’s something which sovereigns have been able to work on with the hyperscalers, especially in terms of getting a lot of, you know, cloud credits for the GPUs, because that is needed even if it’s not about building a frontier model; it’s even to leverage a frontier model, build some reasoning models on top of it, and ensure that you’re able to build an application which is meaningful. It’s not that every time you need a powerful GPU, but there are occasions where you definitely would, and hence, you know, using some of those cloud credits will become a big need. And then, of course, when you switch to regulations and so on: how do you make sure that even having a policy is something which is shared? You don’t want to reinvent the wheel every single time. So do you have a method by which you could leverage the existing, you know, look at what is out there in the world and then sort of leverage it and try to reuse it? Because what you don’t want is to have a hundred versions of the same thing with a few nuances here and there. So that’s something which I think countries will try and create a model for as well.

Sabina Chofu

Thank you so much and I’m gonna kind of turn over to

Audience

Yes. Yes. Looking forward to a truth-, transparency- and accountability-driven world. It takes 30 years for the Epstein files to come out in a place like America, the developed world. Is that the speed of the system till it collapses and till we start a new world? Are we resigned to that fate?

Sabina Chofu

Yeah, so I can’t really see the link between the Epstein files and the… 30 years since the world was destroyed by Aaron Mulder in 2001. You don’t do the truth to come out. So you don’t have the system speed. Yes. Sure. Thank you. So on… just to kind of build on what Rajesh was saying there on the capability side. So maybe if we move into a bit of cross-border cooperation, and Bella, if I can maybe turn to you just to build on those points. Because obviously, what we are seeing across the developing world in particular is that often it’s kind of the institutional capacity that’s a bit of an issue there, kind of doing all the engagement and all the investments and all the, you know… you kind of still run into.

And I saw you were taking notes furiously, so I’m sure you have reflections on what has been said so far. But also, what are some of the resources…

Bella Wilkinson

dependencies, figure out what they want to invest in and what dependencies they’re willing to accept, wanting to build strong institutions, again, that can mainline AI directly into public service delivery, and as you said, enable cross-border cooperation, might take a step back and figure out which foreign capabilities or foreign services they’re willing to accept at some levels of the stack and where they’d like to invest in indigenous solutions. And I mentioned open source earlier because this has come up time and time again, and I’m sure it’s going to be absolutely no surprise to our audience here today. An example which has really stuck with me, and Rafi, I’d be really interested in your thoughts on this, is the Southeast Asian Languages in One Network model, the multilingual SEA-LION LLM.

And this is something we’ve called for, again, in a really interesting collaboration with AI Safety Asia: open models with local adaptation, really balancing, again, inputs from open-source models potentially provided by foreign providers with adaptation to a local context. And so I think, leaving the summit, what I’m really going to be interested in is this connection between drawing on, I guess, inputs from the open-source community, fine-tuning and locally adapting their contributions, and then perhaps doing so not only in the service of, again, strong, robust institutions at the national level who are AI-ready, but also on this kind of collective, cross-border level. I hope that makes sense.

Sabina Chofu

It does, and I’m going to let Rafi kind of fit into that as well, because you’ve segued really nicely into his part. But also, feel free to react to what Bella has said; and if you can also touch upon what you’ve seen as best practice in international and cross-border collaboration, maybe in healthcare, climate resilience, education, anywhere you’ve seen good stories to tell, please do share.

Rafik Rikorian

I mean, I do think a lot about the local fine-tuning, and I think that that’s actually a really powerful concept: we can all contribute to a core and then locally fine-tune for our values and our needs. And I think that this has shown up in a bunch of different ways, and I’m interested personally in all these alternative (I don’t even want to call them alternative, but other) architectures that enable this to be possible. Because in some ways we’re kind of being fed a regime that says it’s not possible, but I think, architecturally, it actually is, in a bunch of different ways. So I love the indigenous data model: looking at what different indigenous peoples have done around data collectives for their local areas. There’s a group of people, for example, in Hawaii that is doing this for their genomic data, because genomic data is really useful for pharmaceutical models. And so they’ve been looking for ways so that they can both monetize their data and also trace its provenance as it goes through these pharmaceutical models.

So there are some professors out of UCSD starting to build actually what these data trusts could look like for Hawaiian people, and I think that that model could be replicated in lots of different parts of the world. Mozilla is actually attempting to do a bunch of this. So we’re creating something that we call the Mozilla Data Collaborative, or Collective, sorry. And what the Collective is meant to be is a marketplace of ethically sourced but provenance-traced data sets, so that you can bring your data. It will actually help you scrub it, clean it, et cetera, and also make sure you have the appropriate licenses on it, so that people can come find the data sets that they want to train their models, but make sure that attribution is given, compensation is given, et cetera.

So we’re literally in conversations with almost every radio station on the planet to try to get their recordings and their transcripts onto the marketplace, not for Mozilla to make money. In fact, we actually want the radio stations to have a monetization path for all the data that they’re sitting on, rather than simply have it scraped by big model providers trying to soak it into their systems. Instead, require that it be licensed, require that compensation be given. So I think there are models there. And on the computational side, I think there are also a lot of interesting things showing up around federated learning opportunities. For those of you who don’t know what federated learning is, think of how Google did this very famously when they trained their handwriting model across everyone’s Android phones.

So your handwriting is very personal and private. Your handwriting is on your device. And Google was able to train a handwriting recognition model that didn’t require them to get access to your data, because part of the training happened on your phone, and then the model weights were shipped back up for centralized training. And I think something like that actually could be an interesting model for international collaboration: I can bring my data to the game, my healthcare data, my values data, my language data, but not have to release it to a different company, or sorry, a different country. Instead, allow you to do it in a different way. And I think that’s a really interesting model.

Do part of the training on my compute, on my infrastructure, and only ship model weights back up, and actually then create bigger models across borders and across geographies that could actually take into account different healthcare scenarios, different value systems, et cetera. So I think that there are these interesting alternative architectures that we can actually start leaning into, these data trust models, these federated learning models, that actually could be massive enablers for cooperation and allow us to build these foundational things that we can then fine-tune and bring to our local context.
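The pattern Rafik describes (train locally on private data, ship only weights, average centrally) is essentially federated averaging. A minimal sketch, with made-up toy data standing in for each country's private dataset:

```python
import numpy as np

rng = np.random.default_rng(1)

# Four "countries" each hold private data for the same underlying task.
# Raw data never leaves a country; only trained weights do, and a
# coordinator averages them. (Toy FedAvg illustration, not Google's
# actual production protocol.)
w_true = np.array([2.0, -1.0, 0.5])      # the shared task all parties care about

def make_local_data(n):
    X = rng.normal(size=(n, 3))
    y = X @ w_true + rng.normal(scale=0.1, size=n)
    return X, y

countries = [make_local_data(50) for _ in range(4)]

def local_train(w, X, y, lr=0.1, epochs=20):
    """Gradient descent on one country's private data; returns weights only."""
    w = w.copy()
    for _ in range(epochs):
        w -= lr * X.T @ (X @ w - y) / len(X)
    return w

w_global = np.zeros(3)
for _ in range(5):                        # federated rounds
    local_ws = [local_train(w_global, X, y) for X, y in countries]
    w_global = np.mean(local_ws, axis=0)  # coordinator averages the weights

print(w_global)                           # close to w_true, no raw data pooled
```

The design choice that matters for sovereignty is visible in `local_train`: the function's return value is the only thing that crosses the border, so each party keeps custody of its healthcare, language, or values data while still contributing to the shared model.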

Sabina Chofu

Thanks so much, Rafi. That fine-tuning seems to be definitely a thing in this conversation, how you kind of build for different cultures and countries. And Halak, maybe I can come to you next, because we keep talking about kind of international cooperation and coordination. But I’m wondering, how do you translate that, you know, chit-chat into actual skills, capacity, capability for emerging economies? And, you know, I mean, we are at a very international AI Impact Summit. So, you know, how do we go from talking about governance to all this international policy actually delivering for emerging economies?

Halak Shirastava

It’s a good question. Let me start out by saying capacity building isn’t just, you know, running workshops or basically telling regulators, you know, this should be done. Capacity building, I think, for emerging economies especially, is critical because… hold on, let me think. Okay, so emerging economies have unequal access to data, information, and technology, right? So what are we trying to solve for here in terms of capacity building? The first thing I would say is shared evidence. We need players to feed into this capacity-building system with documents, results, performance benchmarks, to lift up other players. That, I think, would be number one.

The second thing I think is key and sometimes overlooked is the value of, like, procurement policies. And I agree with Bella: what if we had an industry coalition, like a cross-border network, where they’re solving for procurement policies or procurement rules? What this does is bring in global players. So now what you’re doing is opening up your country to different markets. The next thing I would say is… let me put it this way. So there are developers, who develop the technology, and then there are deployers; they buy the technology and they use it. So, for example, a public sector agency. Why is it so… Thank you.

…Economist Frank Nagle has a recent report estimating that approximately 24 billion U.S. dollars are being wasted by not switching to open-source models right now. So the economics are starting to make a lot of sense. So I think once all these stars align, it becomes almost obvious what an answer could look like for local governments around open-source AI models, et cetera. So I’m really excited for that in the next 12 to 18 months.

Sabina Chofu

Thank you. Rajesh?

Rajesh Nambia

No, I agree with both of what’s been said so far, but I also want to give a sense of this: when people look at AI governance, they tend to lead with regulation first. I believe that countries, and especially the countries which we talked about from an inclusion point of view, have got to lead with an innovation-first mindset, because regulation is required and certainly needed, but I think innovation is probably needed more, in some sense. And also, when you look at AI governance, while there could be horizontal governance which will apply to every AI system, I think the more meaningful governance is what you find when you get into sectoral governance. Meaning, when you look at AI systems for healthcare, you’ll find that the understanding of a harm in the healthcare segment is very different from financial services and so on. So how do you get into those sectoral areas? Then you can have a meaningful governance structure. And last but not least, you need to have the right talent, people who actually understand all of this in the public sector, the people who are supposedly governing all of this. It’s not the talent in terms of broader AI model building and people who are building AI systems, but how do you make sure that there is talent in the governance space, in the governments, among the people who are actually regulating it? If they don’t understand the real harm, then it’s going to be a bigger issue. And especially when it comes to, you know, the list of countries that we talked about, when you get deeper down the list you will find that there is a talent issue in terms of understanding.

Sabina Chofu

Thank you. And, you know, as someone who lives in Brussels, I’ll make sure to take that message back. Halak.

Halak Shirastava

Okay, so what am I most excited about, I guess, in the next 12 months? I mean, in the last few days, you’ve seen companies really, really excited about AI, but what you’ve also seen is countries very excited about AI. So what does this mean for governance? It means that the community and the participation is only going to increase; I don’t see it going backwards. And so, as the technology is evolving, more players are going to have a voice in the system and in the standards bodies, the ITU bodies or the ISO bodies. And I think, because of this convergence, we are going to, as a society, just increase our literacy of not only AI but technology, and also bring it into whatever we’re in, whether we’re in the private sector or the public sector.

And because of that, yeah, I think a lot of progress will be made in the next 12 months, and you’ll see it as it converges.

Sabina Chofu

Thank you so much. Thanks to all the panel. Thanks for being here, and enjoy the rest of your day. Thank you. Thank you.

Related Resources: knowledge base sources related to the discussion topics (32)
Factual Notes: claims verified against the Diplo knowledge base (3)
Confirmed (high)

“Sabina Chofu is the International Policy and Strategy Lead at TechUK, and TechUK is the sister association of NASSCOM in the UK.”

The knowledge base lists Sabina Chofu as International Policy and Strategy Lead at TechUK and notes that TechUK is the sister association of NASSCOM in the UK, confirming the report’s statement.

Additional Context (medium)

“India is creating public‑private compute consortia, shared GPU clusters such as the AI Mission compute cluster, and cloud‑credit schemes from hyperscalers to provide AI resources without each country having to build a frontier model from scratch.”

A recent Indian white‑paper described a national push to democratise AI infrastructure, treating compute, datasets and models as digital public goods and encouraging shared resources, which adds context to the reported compute‑consortium initiatives.

Additional Context (medium)

“The emerging “AI divide” is larger than the earlier digital divide because it concerns both agency and access.”

Discussion in the knowledge base about policy levers to bridge the AI divide highlights that the divide now encompasses issues of agency and access beyond the traditional digital‑access gap, providing additional nuance to the claim.

External Sources (110)
S1
Smart Regulation Rightsizing Governance for the AI Revolution — -Rajesh Nambia- President of NASCOM (National Association of Software and Service Companies in India)
S2
Smart Regulation Rightsizing Governance for the AI Revolution — Halak Shirastava from Cohere brought a private sector perspective emphasizing the practical importance of technical stan…
S3
WS #280 the DNS Trust Horizon Safeguarding Digital Identity — – **Audience** – Individual from Senegal named Yuv (role/title not specified)
S4
Building the Workforce_ AI for Viksit Bharat 2047 — -Audience- Role/Title: Professor Charu from Indian Institute of Public Administration (one identified audience member), …
S5
Nri Collaborative Session Navigating Global Cyber Threats Via Local Practices — – **Audience** – Dr. Nazar (specific role/title not clearly mentioned)
S6
Smart Regulation Rightsizing Governance for the AI Revolution — – Bella Wilkinson- Rafik Rikorian – Bella Wilkinson- Sabina Chofu
S7
Smart Regulation Rightsizing Governance for the AI Revolution — -Sabina Chofu- International Policy and Strategy Lead at TechUK (sister association of NASCOM in the UK)
S8
https://dig.watch/event/india-ai-impact-summit-2026/smart-regulation-rightsizing-governance-for-the-ai-revolution — But we do have… What I start with, just next to me here, Bella Wilkinson, who’s a research fellow on the Digital Socie…
S9
Smart Regulation Rightsizing Governance for the AI Revolution — -Rafik Rikorian- Chief Technology Officer for Mozilla
S10
https://dig.watch/event/india-ai-impact-summit-2026/smart-regulation-rightsizing-governance-for-the-ai-revolution — But we do have… What I start with, just next to me here, Bella Wilkinson, who’s a research fellow on the Digital Socie…
S11
What is it about AI that we need to regulate? — The Role of International Institutions in Setting Norms for Advanced TechnologiesThe discussions across IGF 2025 session…
S12
UNSC meeting: Regional arrangements for peace — Austia:Thank you, Mr. President, and thank you for organizing this open debate. As we move closer to the summit of the f…
S13
(Interactive Dialogue 2) Summit of the Future – General Assembly, 79th session — International Criminal Police Organization Interpol: Excellencies, ladies and gentlemen, we gather at a time when organ…
S14
Leaders TalkX: Partnership pivot: rethinking cooperation in the digital era — This comment elevated the entire discussion by acknowledging the elephant in the room – geopolitical tensions – while re…
S15
Day 0 Event #150 Digital Rights in Partnership Strategies for Impact — ## Accountability Mechanisms and Transparency The panellists identified several approaches to accountability, though th…
S16
Cutting through Cyber Complexity / DAVOS 2025 — Current regulation processes are too slow compared to the speed at which cyber attacks can occur and cause massive disru…
S17
Artificial Intelligence & Emerging Tech — Jennifer Chung:Thank you, Nazar. I actually do see two more questions from the Bangladesh Remote Hub. This is good. This…
S18
AI as critical infrastructure for continuity in public services — Data silos, lack of governance and insufficient data quality cause most pilots to stall before production. Without prope…
S19
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Jeetu Patel President and Chief Product Officer Cisco Inc — And the big mindset shift that’s starting to occur is this notion that, you know, these aren’t just productivity tools. …
S20
WS #208 Democratising Access to AI with Open Source LLMs — The speaker mentions the need for GPU infrastructure and the high costs associated with it.
S21
https://dig.watch/event/india-ai-impact-summit-2026/building-public-interest-ai-catalytic-funding-for-equitable-compute-access — to be with us, so thank you. We are here because we believe in AI’s transformative potential, and I’m certain you’ve hea…
S22
Building Public Interest AI Catalytic Funding for Equitable Compute Access — And here, India is not waiting for permission. India is not waiting for permission. India is showing that it can be done…
S23
Global Perspectives on Openness and Trust in AI — And then exclusive partnerships and the systems being opaque. So those were the things identified in the market study. A…
S24
Driving Indias AI Future Growth Innovation and Impact — The innovate side really comes down to. Areas like skilling, which I know when Minister Chaudhry joins us, we will get i…
S25
Balancing innovation and oversight: AI’s future requires shared governance — At IGF 2024, day two in Riyadh, policymakers, tech experts, and corporate leaders discussed one of the most pressing dil…
S26
Conversation: 01 — Regulation perspective depends on each country’s development stage – countries should innovate first before heavily regu…
S27
How to make AI governance fit for purpose? — Innovation should be prioritized over excessive regulation
S28
Laying the foundations for AI governance — This discussion revealed both the substantial challenges in translating AI governance principles into practice and the s…
S29
Leveraging the UN system to advance global AI Governance efforts — The current difficulties in achieving consensus in multilateral systems underscore the necessity for inclusive negotiati…
S30
Main Session | Policy Network on Artificial Intelligence — The discussion highlighted the complex and multifaceted nature of AI governance challenges. While there was broad agreem…
S31
A Global Human Rights Approach to Responsible AI Governance | IGF 2023 WS #288 — Speaker:It’s a hard question and also for the invitation to be part of this panel, I’m very glad to be here. I’m Vladimi…
S32
Diplomatic policy analysis — Digital divides:Not all countries have equal access to advanced analytical tools, perpetuating inequalities in diplomati…
S33
What policy levers can bridge the AI divide? — ## Forward-Looking Perspectives ## Key Challenges and Opportunities Lacina Kone: Before talking about the bridging of …
S34
A view on digital divide and economic development — Hence, even thoughICTs provide opportunities for economic growth and social development, they have the potential to excl…
S35
Open Forum #58 Collaborating for Trustworthy AI an Oecd Toolkit and Spotlight on AI in Government — Since its adoption in May 2019, 48 countries and the European Union have adhered to the OECD Principles on Artificial In…
S36
WS #145 Revitalizing Trust: Harnessing AI for Responsible Governance — The level of consensus among the speakers was relatively high, particularly on the benefits and potential applications o…
S37
AI Development Beyond Scaling: Panel Discussion Report — Choi advocates for AI democratization where AI reflects human knowledge and values, serves all humans rather than just t…
S38
Responsible AI for Shared Prosperity — The balance between open-source development and community sovereignty presents ongoing challenges. While open-source app…
S39
Better governance for fairer digital markets: unlocking the innovation potential and leveling the playing field (UNCTAD) — Interoperability emerges as a potential solution to prevent the monopolisation of functions and data by large tech compa…
S40
Closing remarks – Charting the path forward — Coherent and interoperable policy frameworks are needed to prevent fragmentation while enabling agile governance
S41
Welcome 2015 ‒ a year of cyber(in)security — Developing institutional and professional capacities is recognised in various forums as a precondition for successful im…
S42
Opening of the session — Capacity building should extend beyond the implementation of voluntary norms.
S43
Artificial intelligence — Despite their technical nature – or rather because of that – standards have an important role to play in bridging techno…
S44
Setting the Rules_ Global AI Standards for Growth and Governance — And it doesn’t have to be the frontier model labs only. It could be app developers and so on. A way to differentiate the…
S45
AI Meets Agriculture Building Food Security and Climate Resilien — “AI must be transparent, auditable, and explainable”[96]. “Without trust, scale will not happen”[99]. “based on open sta…
S46
Opening address of the co-chairs of the AI Governance Dialogue — 3. Establishing international technical standards that allow policy and regulation to remain flexible and agile Tomas L…
S47
WS #162 Overregulation: Balance Policy and Innovation in Technology — 3. Context-Specific Regulation James Nathan Adjartey Amattey: So thank you very much, Nicolas, for that introduction. …
S48
WS #214 Youth-Led Digital Futures: Integrating Perspectives and Governance — – Balancing innovation with regulation for emerging technologies Keith Andere stressed the importance of harmonization …
S49
How AI Drives Innovation and Economic Growth — And when I say incumbents, those firms that have more than 1 ,000 employees. In around 2000, 50 % of employees used to w…
S50
Emerging Markets: Resilience, Innovation, and the Future of Global Development — Countries should focus on collecting and sequencing their genetic and biodiversity data as valuable assets for future bi…
S51
Smart Regulation Rightsizing Governance for the AI Revolution — This comment is deeply insightful because it cuts through the optimistic summit rhetoric to present a stark geopolitical…
S52
Laying the foundations for AI governance — This comment introduced a different geopolitical perspective that complicated the discussion in important ways. While it…
S53
AI and Global Power Dynamics: A Comprehensive Analysis of Economic Transformation and Geopolitical Implications — Bremmer argues that the rapid pace of AI development is outstripping the ability of governments and international instit…
S54
Building Sovereign and Responsible AI Beyond Proof of Concepts — Sovereignty challenges have become increasingly prominent, particularly given current geopolitical tensions. Questions a…
S55
Development of Cyber capacities in emerging economies | IGF 2023 Open Forum #6 — This Open Forum follows the dialogue already opened in the workshop at the WSIS Forum 2023 “Cybersecurity and cyber resi…
S56
What is it about AI that we need to regulate? — The Role of International Institutions in Setting Norms for Advanced TechnologiesThe discussions across IGF 2025 session…
S57
Comprehensive Discussion Report: The Future of Artificial General Intelligence — Current geopolitical tensions and adversarial relationships between major powers make scientific cooperation proposals u…
S58
Open Forum #58 Collaborating for Trustworthy AI an Oecd Toolkit and Spotlight on AI in Government — Since its adoption in May 2019, 48 countries and the European Union have adhered to the OECD Principles on Artificial In…
S59
WS #208 Democratising Access to AI with Open Source LLMs — Developing countries face challenges in implementing open source AI due to limited infrastructure and technical expertis…
S60
Upskilling for the AI era: Education’s next revolution — The coalition’s approach prioritises accessibility and inclusion, with particular focus on reaching underserved and marg…
S61
Global AI Policy Framework: International Cooperation and Historical Perspectives — So global principles are very important, but implementation must account for national contexts and capacities, as you we…
S62
Searching for Standards: The Global Competition to Govern AI | IGF 2023 — In conclusion, the UNESCO recommendation on AI ethics provides crucial guidance for global AI governance. By grounding A…
S63
How to make AI governance fit for purpose? — All speakers recognize that AI’s global nature requires international cooperation and coordination, though they may diff…
S64
WS #288 An AI Policy Research Roadmap for Evidence-Based AI Policy — The level of disagreement is moderate but significant for implementation. While speakers share fundamental goals of resp…
S65
Inclusive AI For A Better World, Through Cross-Cultural And Multi-Generational Dialogue — Factors such as restricted access to computing resources and data further impede policy efficacy. Nevertheless, the cont…
S66
Building Public Interest AI Catalytic Funding for Equitable Compute Access — And here, India is not waiting for permission. India is not waiting for permission. India is showing that it can be done…
S67
Global Perspectives on Openness and Trust in AI — And then exclusive partnerships and the systems being opaque. So those were the things identified in the market study. A…
S68
Revitalising trust with AI: Boosting governance and public services — AI is reshaping public governance, offering innovative ways to enhance services and restore trust in institutions. The d…
S69
Laying the foundations for AI governance — This discussion revealed both the substantial challenges in translating AI governance principles into practice and the s…
S70
Main Session | Policy Network on Artificial Intelligence — The discussion highlighted the complex and multifaceted nature of AI governance challenges. While there was broad agreem…
S71
Chinese leading AI expert argues for AI governance by the UN — The rapid development of AI technology has outpaced existing regulatory frameworks, creating challenges in areas such as…
S72
Smart Regulation Rightsizing Governance for the AI Revolution — This comment is deeply insightful because it cuts through the optimistic summit rhetoric to present a stark geopolitical…
S73
What policy levers can bridge the AI divide? — ## Key Challenges and Opportunities Lacina Kone: Before talking about the bridging of AI, bridging the gap of the AI, t…
S74
Bridging the Digital Divide for Transition to a Greener Economy — Mehmed Sait Akman:Thank you very much. Let me express my thank you very much again and for your kind invitation to this …
S75
WS #462 Bridging the Compute Divide a Global Alliance for AI — The discussion revealed that the challenge extends beyond inequitable distribution to an overall supply-demand gap affec…
S76
AI Impact Summit 2026: Global Ministerial Discussions on Inclusive AI Development — We deeply appreciate the kind hospitality we have received this week in India at the India AI Impact Summit. Costa Rica …
S77
A view on digital divide and economic development — Hence, even thoughICTs provide opportunities for economic growth and social development, they have the potential to excl…
S78
WS #145 Revitalizing Trust: Harnessing AI for Responsible Governance — The level of consensus among the speakers was relatively high, particularly on the benefits and potential applications o…
S79
Responsible AI for Shared Prosperity — The balance between open-source development and community sovereignty presents ongoing challenges. While open-source app…
S80
AI Development Beyond Scaling: Panel Discussion Report — Choi advocates for AI democratization where AI reflects human knowledge and values, serves all humans rather than just t…
S81
Better governance for fairer digital markets: unlocking the innovation potential and leveling the playing field (UNCTAD) — Access to open markets through regulation is highlighted as beneficial for small messaging companies. This provides oppo…
S82
Setting the Rules_ Global AI Standards for Growth and Governance — Etienne Chaponniere from Qualcomm brought a unique perspective as a chipset provider, emphasising the democratising pote…
S83
Omnipresent Smart Wireless: Deploying Future Networks at Scale — Harmonization between stakeholders is essential for the successful deployment of 6G. Standardization, scalability, and i…
S84
Welcome 2015 ‒ a year of cyber(in)security — Developing institutional and professional capacities is recognised in various forums as a precondition for successful im…
S85
Dynamic Coalition Collaborative Session — Dr. Muhammad Shabbir: Thank you very much, Rajendra, and thank you very much to my colleagues who have spoken before me….
S86
Agenda item 6: other matters — Japan: Thank you, Mr. Chair. Japan believes that capacity building is essential to maintaining peace and stability and…
S87
WS #187 Bridging Internet AI Governance From Theory to Practice — The discussion maintained a thoughtful but increasingly cautious tone throughout. It began optimistically, with speakers…
S88
Engineering Accountable AI Agents in a Global Arms Race: A Panel Discussion Report — The discussion maintained a thoughtful but somewhat cautious tone throughout, with speakers acknowledging both opportuni…
S89
Defying Cognitive Atrophy in the Age of AI: A World Economic Forum Stakeholder Dialogue — The discussion began with a cautiously optimistic tone, acknowledging both opportunities and risks. However, the tone be…
S90
What is it about AI that we need to regulate? — The Role of International Institutions in Setting Norms for Advanced TechnologiesThe discussions across IGF 2025 session…
S91
Charting New Horizons: Gender Equality in Supply Chains – Challenges and Opportunities — A conscientious request for clarity and specificity was also apparent, underlining the need for concrete, actionable pla…
S92
Leaders TalkX: Digital Advancing Sustainable Development: A Trusted Connected World — The unwaveringly positive sentiment underlines a strong conviction in the potential of collective and inclusive efforts …
S93
Open Forum #44 Building Trust with Technical Standards and Human Rights — The tone was largely collaborative and solution-oriented. Speakers approached the topic from different perspectives but …
S94
Summit Opening Session — The declaration was developed through an inclusive consultation process within the International Advisory Body on Submar…
S95
Open Forum #52 Strengthening Information Integrity Through Coalitions — The discussion maintained a professional and collaborative tone throughout, characterized by urgency about the scale of …
S96
Powering the Technology Revolution / Davos 2025 — The tone was generally optimistic and forward-looking, with panelists highlighting opportunities for innovation and prog…
S97
Next-Gen Industrial Infrastructure / Davos 2025 — The tone was largely optimistic and forward-looking, with speakers enthusiastically sharing their visions and initiative…
S98
Building Population-Scale Digital Public Infrastructure for AI — The tone is optimistic and collaborative throughout, with speakers sharing concrete examples of successful implementatio…
S99
Governments, Rewired / Davos 2025 — The overall tone was optimistic and forward-looking, with speakers highlighting the transformative potential of technolo…
S100
Flexibility 2.0 / Davos 2025 — The panel discussion provided a comprehensive exploration of the gig economy’s impact on the future of work. While ackno…
S101
Opening of the session — – Addressing the technological divide between developed and developing countries Chair: Thank you very much, Belgium, …
S102
https://dig.watch/event/india-ai-impact-summit-2026/ai-for-social-empowerment_-driving-change-and-inclusion — And it’s impossible to regulate this. It’s impossible to regulate this because it’s everywhere. So the only way we are a…
S103
https://dig.watch/event/india-ai-impact-summit-2026/ai-that-empowers-safety-growth-and-social-inclusion-in-action-2 — I mean, the high impact use case can have more investment, more focus versus a low risk, right? I think that’s the first…
S104
International Cooperation for AI & Digital Governance | IGF 2023 Networking Session #109 — This comprehensive approach aims to understand the societal, economic, and ethical aspects of this advancing technology….
S105
Webinar session — Vera Toro argues that achieving consensus during a period when multilateralism faces widespread questioning serves as im…
S106
The Geopolitics of Materials: Critical Mineral Supply Chains and Global Competition — Mosako argues that development finance institutions like hers can bridge the gap between regions with different comparat…
S107
AI race shows diverging paths for China and the US — The US administration’s new AI action plan frames global development as anAI racewith a single winner. Officials argue A…
S108
Africa’s Prospects in the New Global Economy: A Comprehensive Analysis from Davos — Johann Jurie Strydom from Old Mutual highlighted opportunities for financial inclusion through digital platforms, noting…
S109
TradeTech’s Trillion-Dollar Promise — Barriers on data and technology side affect emerging economies harder. The inability to connect and create necessary in…
S110
New plan outlines how India will democratise AI infrastructure — Indiais moving to rebalance access to AI infrastructureas part of a new national push to close gaps in computing power a…
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
Bella Wilkinson
2 arguments · 155 words per minute · 979 words · 377 seconds
Argument 1
Global consensus on AI governance is unrealistic; focus on partial alignment via issue‑specific coalitions
EXPLANATION
Bella argues that achieving worldwide agreement on AI rules is impossible in the current geopolitical climate. Instead, she suggests concentrating on limited, issue‑focused coalitions that can later be scaled through multilateral formats.
EVIDENCE
She states that “Global consensus on how to govern AI is a no-go” and that “partial alignment on priority issue areas is possible” and recommends supporting smaller gatherings that can be scaled later [26-28].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Bella’s view that worldwide AI consensus is a “no-go” and that issue-specific coalitions are feasible is echoed in the Smart Regulation commentary (compute and coalition challenges) [S1], the discussion on how to bring minimally aligned states together via coalitions [S8], and the trust-building partnership pivot noted in the Leaders TalkX summary [S14].
MAJOR DISCUSSION POINT
Realism of Global AI Governance and Need for Coalitions
AGREED WITH
Sabina Chofu
DISAGREED WITH
Rajesh Nambia
Argument 2
Multilateral institutions can act as brokers, but trusted mechanisms are needed to bring rivals together
EXPLANATION
Bella notes that traditional multilateral bodies have historically mediated complex governance issues, but today they lack the trust needed to convene competing powers. She proposes building coalitions around trusted mechanisms to overcome this gap.
EVIDENCE
She explains that “multilateral institutions in the past have been brokers, mediators and scalers of consensus” but now the challenge is “how on earth do we bring rivals and competitors around the same table?” and suggests coalition building around trusted approaches [36-39].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The role of international institutions as brokers and the need for trusted mechanisms are highlighted in the analysis of norm-setting bodies for advanced technologies [S11], the question of convening rival states through trusted approaches [S8], and the emphasis on trust-building partnerships in the Leaders TalkX report [S14].
MAJOR DISCUSSION POINT
Realism of Global AI Governance and Need for Coalitions
Sabina Chofu
1 argument · 142 words per minute · 1209 words · 508 seconds
Argument 1
Coalition building is the most pragmatic path given current geopolitical tensions
EXPLANATION
Sabina agrees that the world lacks the multilateral cooperation needed for AI governance and highlights coalition building as the realistic way forward. She frames it as the best hope under present conditions.
EVIDENCE
She says, “you have talked about coalition building, and it’s probably the best we can hope for in the world as it is” in response to Bella’s remarks [48-50].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Sabina’s endorsement of coalition building aligns with the Smart Regulation commentary that records the remark “you have talked about coalition building, and it’s probably the best we can hope for” [S1].
MAJOR DISCUSSION POINT
Realism of Global AI Governance and Need for Coalitions
AGREED WITH
Bella Wilkinson
Audience
1 argument · 122 words per minute · 53 words · 26 seconds
Argument 1
Current transparency and accountability processes are too slow, demanding faster mechanisms
EXPLANATION
An audience member points out that existing transparency mechanisms, such as the release of investigative files, take decades, which is unacceptable for rapidly evolving AI risks. They call for a speedier system to avoid systemic collapse.
EVIDENCE
The audience remarks, “It takes 30 years for FC files to come out in a place like America… Is that the speed of the system till it collapses?” highlighting the slowness of current processes [135-138].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Audience concerns about slow filing cycles are documented in the Smart Regulation piece (30-year delays) [S1], reinforced by the Day 0 accountability mechanisms discussion [S15], and by the DAVOS observation that regulation lags behind fast-moving cyber threats [S16].
MAJOR DISCUSSION POINT
Realism of Global AI Governance and Need for Coalitions
Rajesh Nambia
7 arguments · 195 words per minute · 1953 words · 598 seconds
Argument 1
Severe compute access gap hampers AI development in smaller and developing economies
EXPLANATION
Rajesh emphasizes that limited access to high‑performance compute resources puts developing nations at a significant disadvantage compared with the US and China. He warns that without shared or pooled compute, these economies will fall further behind.
EVIDENCE
He describes the “limited access to the broader compute facility” as a major barrier for smaller countries and stresses the need for pooling resources [57-60].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Rajesh’s point on compute scarcity is supported by the Smart Regulation analysis of limited compute for smaller economies [S1], the high GPU cost barrier noted in the open-source LLM session [S20], the observation of a global “compute divide” [S21], and India’s public GPU infrastructure example [S22].
MAJOR DISCUSSION POINT
Barriers to AI Adoption in Developing Nations
DISAGREED WITH
Bella Wilkinson
Argument 2
Data silos, poor data quality, and inadequate infrastructure (power, connectivity) limit AI potential
EXPLANATION
Rajesh points out that data in many developing regions is fragmented across government and departmental silos, often of low quality, and that unreliable power and connectivity further restrict AI projects. These factors together hinder the creation of representative AI models.
EVIDENCE
He notes that “the data itself is very siloed” and that “power is going to be a huge foundation” while also mentioning connectivity challenges despite satellite options [61-66].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The challenges of data silos and low-quality data are highlighted in the AI as critical infrastructure briefing [S18], while connectivity gaps in developing regions are discussed in the Emerging Tech Q&A [S17].
MAJOR DISCUSSION POINT
Barriers to AI Adoption in Developing Nations
Argument 3
High expense of GPU clusters and lack of clean power further exacerbate the divide
EXPLANATION
Rajesh argues that even when purchasing power parity is considered, the cost of assembling GPU clusters needed for meaningful models is prohibitive. Additionally, the requirement for clean energy adds extra financial and logistical burdens for developing economies.
EVIDENCE
He explains that “the expense of GPU clusters” is a “very different ballgame” and that “clean power” adds an additional tax for the developing world [59-60][66-68].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The prohibitive cost of GPU clusters is examined in the open-source LLM discussion on GPU expenses [S20], and the broader compute-divide analysis underscores financial hurdles for clean-power-dependent hardware [S21].
MAJOR DISCUSSION POINT
Barriers to AI Adoption in Developing Nations
Argument 4
Public‑private compute consortia and cloud‑credit programs can pool resources and give broader access to AI compute
EXPLANATION
Rajesh cites examples from India where government, academia, and industry share GPU resources through a national AI mission, and where cloud‑credit arrangements with hyperscalers help smaller players access compute without owning expensive hardware.
EVIDENCE
He mentions “our own AI mission has created this cluster” that is shared across sectors and notes that “sovereigns have been able to work with the hyperscalers… to get cloud credits for GPUs” [130-133].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Public-private consortia and cloud-credit schemes are exemplified by India’s AI mission and hyperscaler cloud-credit arrangements described in the equitable compute access briefing [S22].
MAJOR DISCUSSION POINT
Models for Cross‑Border Collaboration
AGREED WITH
Rafik Rikorian
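The pooling mechanism Rajesh describes can be pictured as a simple quota policy. The sketch below is purely illustrative: the member names, the equal-share rule, and the numbers are invented for this example and are not taken from India’s AI mission. Each consortium member receives at most an equal share of pooled GPU hours, and capacity left unused by light consumers is redistributed to members whose requests were not fully met.

```python
# Toy allocator for a pooled compute consortium (illustrative assumptions only).

def allocate(pool_gpu_hours, requests):
    """Grant each member min(request, equal share), then split the leftover
    among members whose requests were not fully met (single pass)."""
    share = pool_gpu_hours / len(requests)
    grants = {m: min(r, share) for m, r in requests.items()}
    leftover = pool_gpu_hours - sum(grants.values())
    # Shortfall per member still under its request.
    unmet = {m: requests[m] - grants[m] for m in requests if requests[m] > grants[m]}
    if unmet:
        total_short = sum(unmet.values())
        for m, short in unmet.items():
            grants[m] += leftover * short / total_short
    return grants

# Hypothetical member demands against a 100 GPU-hour pool.
demand = {"university": 40, "startup": 10, "ministry": 70}
grants = allocate(100, demand)
print(grants)
```

When total demand exceeds the pool, the single redistribution pass exhausts the pool exactly; light users (here, the startup) are never granted more than they asked for.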
Argument 5
Developing talent for both AI innovation and governance is essential for effective sectoral oversight
EXPLANATION
Rajesh stresses that countries need skilled personnel not only to build AI systems but also to understand and regulate them, especially in sector‑specific contexts such as health or finance. He warns that talent gaps will undermine governance effectiveness.
EVIDENCE
He says, “you need the right talent and people who can actually understand… both in public sector and people who are supposedly governing” and highlights the uneven talent distribution across countries [214-219].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The importance of AI talent pipelines is emphasized in the India AI growth and skilling report [S24] and reinforced by the Smart Regulation note on uneven talent distribution across countries [S1].
MAJOR DISCUSSION POINT
Capacity Building and Skill Development for Emerging Economies
AGREED WITH
Halak Shirastava
Argument 6
Emerging economies should prioritize innovation and pilot projects before imposing heavy regulation
EXPLANATION
Rajesh argues that an innovation‑first mindset allows countries to build capacity and demonstrate value before layering restrictive regulations, which could otherwise stifle growth.
EVIDENCE
He states, “countries… have to lead with innovation first mindset because regulation is required but innovation is probably needed more” [213-215].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The innovation-first stance is advocated in the IGF balancing-innovation session [S25], the development-stage regulation perspective paper [S26], and the “innovation over excessive regulation” commentary [S27].
MAJOR DISCUSSION POINT
Innovation‑First Approach and Sector‑Specific Governance
DISAGREED WITH
Halak Shirastava
Argument 7
Sector‑specific governance (healthcare, finance, etc.) yields more meaningful oversight than blanket horizontal rules
EXPLANATION
Rajesh contends that AI risks differ across domains, so governance frameworks should be tailored to each sector rather than applying a one‑size‑fits‑all approach. This enables more precise risk mitigation and accountability.
EVIDENCE
He explains that “horizontal governance” is less meaningful than “sectoral governance” where harms differ, citing healthcare versus financial services [214-219].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The case for sector-tailored oversight versus horizontal rules is discussed in the IGF balancing-innovation session, which stresses domain-specific governance needs [S25].
MAJOR DISCUSSION POINT
Innovation‑First Approach and Sector‑Specific Governance
Rafik Rikorian
4 arguments · 189 words per minute · 1391 words · 439 seconds
Argument 1
An open‑source stack (Linux analogy) can provide shared AI infrastructure while preserving national sovereignty
EXPLANATION
Rafik draws on the Linux model, where a common code base underpins billions of devices, to illustrate how a shared AI stack could be collaboratively developed while allowing each nation to retain sovereign control over its implementation.
EVIDENCE
He explains that “every computer on the planet runs Linux” and that “every country… contributes to the single code base” while still being able to fine-tune their own versions [70-78].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Rafik’s Linux analogy aligns with the Global Perspectives on Openness and Trust report that highlights openness as a foundation for shared AI infrastructure while respecting sovereignty [S23], and with the open-source LLM discussion on shared stacks [S20].
MAJOR DISCUSSION POINT
Open Source, Standards, and Shared Infrastructure as Enablers
AGREED WITH
Halak Shirastava
DISAGREED WITH
Halak Shirastava
Argument 2
Developing open standards and interfaces enables global collaboration and digital sovereignty
EXPLANATION
Rafik argues that defining open standards and interfaces, similar to the transition from proprietary stacks to the LAMP stack, would let diverse actors contribute to a common foundation while customizing it to local values, thereby supporting digital sovereignty.
EVIDENCE
He discusses how “the LAMP stack” enabled openness, and calls for “open standards and open interfaces” to let countries own their version of the stack [91-96].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The push for open standards and interfaces is reinforced by the Global Perspectives on Openness report that calls for interoperable standards to foster collaboration [S23] and by the analysis of international norm-setting bodies that stress open standards for digital sovereignty [S11].
MAJOR DISCUSSION POINT
Open Source, Standards, and Shared Infrastructure as Enablers
AGREED WITH
Halak Shirastava
Argument 3
Data trusts offer ethically sourced, provenance‑tracked datasets for shared use and fair compensation
EXPLANATION
Rafik describes Mozilla’s Data Collaborative, a marketplace where data contributors retain provenance, licensing, and receive compensation, enabling ethically sourced datasets to be shared across AI developers.
EVIDENCE
He outlines the Mozilla Data Collaborative as “a marketplace of ethically sourced but provenance-tracked data sets” that ensures attribution and compensation, and mentions outreach to radio stations worldwide [160-166].
MAJOR DISCUSSION POINT
Models for Cross‑Border Collaboration
AGREED WITH
Rajesh Nambia
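The provenance and licensing guarantees Rafik attributes to the data-trust model can be illustrated with a minimal record type. All field names and the license-gating rule below are assumptions invented for this sketch; the Mozilla Data Collaborative’s actual schema is not described in the session.

```python
# Minimal sketch of a provenance-tracked dataset record (hypothetical schema).
from dataclasses import dataclass
from datetime import date

@dataclass(frozen=True)
class DataRecord:
    contributor: str    # who supplied the data (enables attribution/compensation)
    license: str        # terms under which the record may be used
    collected_on: date  # provenance: when the data was gathered
    payload: str        # the data itself (e.g. a transcript snippet)

def usable_for_training(record: DataRecord, accepted_licenses: set[str]) -> bool:
    """A marketplace would gate model training on license compatibility."""
    return record.license in accepted_licenses

rec = DataRecord("community-radio-station", "CC-BY-4.0", date(2026, 1, 15),
                 "sample broadcast transcript")
print(usable_for_training(rec, {"CC-BY-4.0", "CC0"}))  # True
```

The frozen dataclass makes provenance fields immutable once a record enters the marketplace, which is the kind of tamper-resistance a trust would need to honor attribution and compensation.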
Argument 4
Federated learning allows cross‑border model training without exposing raw data, respecting privacy and sovereignty
EXPLANATION
Rafik explains federated learning as a technique where model training occurs locally on devices, with only aggregated model updates sent back, enabling collaboration across borders without sharing sensitive raw data.
EVIDENCE
He provides the example of Google’s handwriting model trained on phones, describing how “training happened on your phone” and only model weights were shipped back, preserving privacy [167-174].
MAJOR DISCUSSION POINT
Models for Cross‑Border Collaboration
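Rafik’s description maps onto the standard federated-averaging pattern: each participant trains on its own data and only model parameters travel. The toy sketch below uses a one-parameter least-squares model with invented data (not the Google handwriting system he cites) to show raw records never leaving their “client”:

```python
# Minimal federated-averaging (FedAvg-style) sketch; model and data are
# invented for illustration. Each client trains locally and shares only a
# model update, never its raw (x, y) records.

def local_update(weight, data, lr=0.1):
    """One gradient-descent step on a 1-D least-squares model y ~ w * x."""
    grad = sum(2 * x * (weight * x - y) for x, y in data) / len(data)
    return weight - lr * grad

def federated_round(global_weight, clients):
    """Average the locally trained weights; raw data stays on each client."""
    updates = [local_update(global_weight, data) for data in clients]
    return sum(updates) / len(updates)

# Three "countries", each holding private samples drawn from y = 2x.
clients = [[(1.0, 2.0), (2.0, 4.0)], [(3.0, 6.0)], [(0.5, 1.0), (4.0, 8.0)]]
w = 0.0
for _ in range(200):
    w = federated_round(w, clients)
print(round(w, 2))  # converges toward 2.0
```

Only `w` crosses the (simulated) border each round, which is the privacy-and-sovereignty property Rafik highlights; production systems add secure aggregation on top of this basic loop.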
Halak Shirastava
3 arguments · 69 words per minute · 931 words · 798 seconds
Argument 1
Sharing evidence, benchmarks, and best‑practice documents builds technical capacity across borders
EXPLANATION
Halak stresses that providing shared documentation—such as performance benchmarks and evaluation reports—helps lift less‑resourced actors and creates a common evidence base for capacity building.
EVIDENCE
She says, “we need players to help into this capacity building system with documents, results, performance, benchmarks, to lift up other players” [188-191].
MAJOR DISCUSSION POINT
Capacity Building and Skill Development for Emerging Economies
AGREED WITH
Rajesh Nambia
Argument 2
Capacity building must include procurement policy frameworks and open‑source adoption, not just workshops
EXPLANATION
Halak argues that effective capacity building should go beyond training sessions to incorporate procurement policies that enable access to open‑source AI tools and create cross‑border industry coalitions.
EVIDENCE
She highlights “the value of procurement policies” and proposes an “industry coalition… solving for procurement policies” to open markets for countries [191-196].
MAJOR DISCUSSION POINT
Capacity Building and Skill Development for Emerging Economies
AGREED WITH
Bella Wilkinson, Sabina Chofu
Argument 3
Evolving technical standards (NIST, ISO) provide flexible, international frameworks that can adapt to rapid AI advances
EXPLANATION
Halak points out that existing technical standards bodies such as NIST and ISO are developing adaptable frameworks that can accommodate fast‑moving AI technologies, offering a viable path for global alignment.
EVIDENCE
She references “technical standards” like NIST and ISO, describing them as “flexible and evolving” and useful for startups to avoid costly country-by-country compliance [102-106].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The relevance of adaptable technical standards is documented in the discussion of international standards bodies (NIST, ISO) offering flexible frameworks for AI governance [S11] and in the Global Perspectives report on evolving standards for AI [S23].
MAJOR DISCUSSION POINT
Innovation‑First Approach and Sector‑Specific Governance
AGREED WITH
Rafik Rikorian
DISAGREED WITH
Rafik Rikorian
Agreements
Agreement Points
Coalition building is the most pragmatic path for AI governance given current geopolitical tensions
Speakers: Bella Wilkinson, Sabina Chofu, Halak Shirastava
Global consensus on AI governance is unrealistic; focus on partial alignment via issue‑specific coalitions
Coalition building is the most pragmatic path given current geopolitical tensions
Capacity building must include procurement policy frameworks and open‑source adoption, not just workshops
All three speakers agree that a full global consensus on AI governance is unlikely and that forming issue-specific or industry coalitions, supported by appropriate procurement policies, is the realistic way forward [26-28][36-39][48-50][191-196].
POLICY CONTEXT (KNOWLEDGE BASE)
This view echoes calls for coalition-building around specific AI issues as a pragmatic alternative to universal treaties, highlighted in discussions on geopolitical constraints and the need for flexible cooperation [S51][S53][S60].
Open standards and shared open‑source infrastructure can enable global collaboration while preserving national sovereignty
Speakers: Rafik Rikorian, Halak Shirastava
An open‑source stack (Linux analogy) can provide shared AI infrastructure while preserving national sovereignty
Developing open standards and interfaces enables global collaboration and digital sovereignty
Evolving technical standards (NIST, ISO) provide flexible, international frameworks that can adapt to rapid AI advances
Both speakers advocate for open, interoperable standards and open-source stacks as a foundation for collaborative AI development that respects sovereignty [70-78][91-96][102-106].
POLICY CONTEXT (KNOWLEDGE BASE)
Standard-developing organisations argue that open technical standards bridge technology and policy, underpinning regulatory frameworks while allowing nations to retain control, a stance reflected in multiple policy briefs on AI standards and sovereignty [S43][S44][S46][S54].
Capacity building through shared evidence, benchmarks and talent development is essential for emerging economies
Speakers: Halak Shirastava, Rajesh Nambia
Sharing evidence, benchmarks, and best‑practice documents builds technical capacity across borders
Developing talent for both AI innovation and governance is essential for effective sectoral oversight
Both emphasize that providing shared documentation and developing skilled personnel are key to raising AI capacity in less-resourced countries [188-191][214-219].
POLICY CONTEXT (KNOWLEDGE BASE)
Capacity-building is repeatedly identified as critical for emerging economies, featuring in IGF forums on cybersecurity, African innovation-regulation balance, and AI upskilling initiatives [S55][S48][S60].
Public‑private compute consortia and data‑trust marketplaces can pool resources to broaden AI access
Speakers: Rajesh Nambia, Rafik Rikorian
Public‑private compute consortia and cloud‑credit programs can pool resources and give broader access to AI compute
Data trusts offer ethically sourced, provenance‑tracked datasets for shared use and fair compensation
Both propose collaborative models (compute-sharing consortia and data-trust marketplaces) to lower barriers for AI development in developing regions [130-133][160-166].
POLICY CONTEXT (KNOWLEDGE BASE)
Reports on compute scarcity in low-resource settings and public-sector compute programmes illustrate how public-private consortia and data-trust models can expand AI access [S59][S66][S65][S67].
Global AI governance consensus is unrealistic in the current geopolitical climate
Speakers: Bella Wilkinson, Sabina Chofu
Global consensus on AI governance is unrealistic; focus on partial alignment via issue‑specific coalitions
Coalition building is the most pragmatic path given current geopolitical tensions
Both speakers concur that achieving worldwide AI governance agreement is a ‘no-go’, and that partial, issue-focused alignment is the viable alternative [26-28][48-50].
POLICY CONTEXT (KNOWLEDGE BASE)
Analysts note that current geopolitical tensions make a universal AI governance treaty unrealistic, advocating instead for issue-specific coalitions and acknowledging disagreement over a global governance structure [S51][S53][S57][S64].
Similar Viewpoints
Both see coalition building around specific issues as the realistic way to advance AI governance amid geopolitical rivalry [26-28][48-50].
Speakers: Bella Wilkinson, Sabina Chofu
Global consensus on AI governance is unrealistic; focus on partial alignment via issue‑specific coalitions
Coalition building is the most pragmatic path given current geopolitical tensions
Both argue that open, interoperable standards and open‑source foundations are essential for collaborative, sovereign‑respecting AI development [70-78][91-96][102-106].
Speakers: Rafik Rikorian, Halak Shirastava
An open‑source stack (Linux analogy) can provide shared AI infrastructure while preserving national sovereignty
Developing open standards and interfaces enables global collaboration and digital sovereignty
Evolving technical standards (NIST, ISO) provide flexible, international frameworks that can adapt to rapid AI advances
Both stress that capacity building must combine shared technical evidence with development of skilled personnel to enable effective AI use and regulation in emerging economies [188-191][214-219].
Speakers: Halak Shirastava, Rajesh Nambia
Sharing evidence, benchmarks, and best‑practice documents builds technical capacity across borders
Developing talent for both AI innovation and governance is essential for effective sectoral oversight
Both propose collaborative resource‑sharing mechanisms—whether compute or data—to lower entry barriers for AI development in less‑resourced settings [130-133][160-166].
Speakers: Rajesh Nambia, Rafik Rikorian
Public‑private compute consortia and cloud‑credit programs can pool resources and give broader access to AI compute
Data trusts offer ethically sourced, provenance‑tracked datasets for shared use and fair compensation
Both recognize that, given the impossibility of universal consensus, open‑source models and issue‑specific coalitions can provide practical pathways for cooperation [26-28][70-78].
Speakers: Bella Wilkinson, Rafik Rikorian
Global consensus on AI governance is unrealistic; focus on partial alignment via issue‑specific coalitions
An open‑source stack (Linux analogy) can provide shared AI infrastructure while preserving national sovereignty
Unexpected Consensus
Open standards as a trust‑building mechanism for AI governance
Speakers: Rafik Rikorian, Halak Shirastava
Developing open standards and interfaces enables global collaboration and digital sovereignty
Evolving technical standards (NIST, ISO) provide flexible, international frameworks that can adapt to rapid AI advances
It is noteworthy that an open-source technologist and a policy lead converge on the importance of open, evolving standards: Rafik from a Linux-style ecosystem perspective, and Halak from a standards-body policy perspective. This cross-disciplinary agreement positions standards as a cornerstone of trustworthy AI collaboration [91-96][102-106].
POLICY CONTEXT (KNOWLEDGE BASE)
Trust in AI systems is linked to transparent, auditable standards; multiple sources cite open standards as a key trust-building tool within AI governance frameworks [S43][S45][S46][S63].
Overall Assessment

The panel shows a clear convergence on three pillars: (1) coalition building and issue‑specific alignment as the pragmatic route for AI governance; (2) the adoption of open, interoperable standards and open‑source infrastructure to preserve sovereignty while enabling collaboration; (3) capacity building through shared evidence, benchmarks and talent development, complemented by public‑private resource‑sharing mechanisms. While there is agreement that a universal global consensus is unattainable, participants differ on the balance between innovation‑first approaches and regulatory frameworks.

Moderate to high consensus on practical cooperation mechanisms (coalitions, open standards, capacity building) but low consensus on the feasibility of a single global governance regime, implying that future policy work should focus on building issue‑specific coalitions, open‑source ecosystems, and shared capacity‑building initiatives.

Differences
Different Viewpoints
Sequencing of innovation and regulation for emerging economies
Speakers: Rajesh Nambia, Halak Shirastava
Emerging economies should prioritize innovation and pilot projects before imposing heavy regulation.
Capacity building must include procurement policy frameworks and evolving technical standards (NIST, ISO) to avoid costly country‑by‑country compliance.
Rajesh argues that countries need an “innovation-first mindset” and should lead with innovation before regulation, suggesting regulation can stifle growth [213-215]. Halak counters that effective capacity building requires early adoption of flexible technical standards and procurement policies to open markets and avoid expensive compliance, implying that regulatory frameworks are essential from the start [102-106][191-196].
POLICY CONTEXT (KNOWLEDGE BASE)
African policy discussions stress the need to balance rapid AI innovation with proportionate regulation, highlighting sequencing challenges for emerging economies [S48][S64].
Preferred mechanism for shared AI infrastructure and governance
Speakers: Rafik Rikorian, Halak Shirastava
An open‑source stack (Linux analogy) can provide shared AI infrastructure while preserving national sovereignty.
Evolving technical standards (NIST, ISO) provide flexible, international frameworks that can adapt to rapid AI advances.
Rafik promotes an open-source model, likening AI to the Linux ecosystem where a common code base is collaboratively built and each nation fine-tunes its own version, also describing Mozilla's data collaborative as a marketplace for ethically sourced data [70-78][160-166]. Halak emphasizes the role of formal, evolving technical standards such as NIST and ISO to give startups a flexible compliance path and to avoid costly country-by-country regulation [102-106]. The two propose different primary enablers: an open-source community versus formal standards bodies.
POLICY CONTEXT (KNOWLEDGE BASE)
Debates on the optimal mechanism point to open-source models, public compute platforms, and data-trust marketplaces as leading proposals for shared AI infrastructure [S59][S66][S67].
Primary barrier to AI adoption in developing nations: compute access vs. coalition building
Speakers: Rajesh Nambia, Bella Wilkinson
Severe compute access gap hampers AI development in smaller and developing economies.
Global consensus on AI governance is unrealistic; focus on partial alignment via issue‑specific coalitions.
Rajesh highlights limited access to high-performance compute and the high expense of GPU clusters as a major obstacle, calling for public-private consortia and cloud-credit programs to pool resources [57-60][130-133]. Bella argues that a global consensus is a “no-go” and that progress should come from building issue-specific coalitions that can later be scaled, placing less emphasis on compute provision and more on governance mechanisms [26-29][38-44].
POLICY CONTEXT (KNOWLEDGE BASE)
Studies identify limited compute capacity as a primary barrier, while coalition-building approaches are promoted to mitigate resource gaps, underscoring the tension between these factors [S59][S65][S66][S60].
Unexpected Differences
Speed of transparency and accountability mechanisms
Speakers: Audience, Sabina Chofu
Current transparency and accountability processes are too slow, demanding faster mechanisms.
Sabina's response dismisses the comment and introduces unrelated references, showing a lack of engagement with the concern.
The audience points out that it takes decades for investigative files to be released, calling for a faster system [135-138]. Sabina replies with unrelated remarks about “Aaron Mulder” and does not address the speed issue, indicating an unexpected disconnect between audience expectations and panel response [140-144].
POLICY CONTEXT (KNOWLEDGE BASE)
Policy frameworks such as the OECD AI Principles call for timely transparency and accountability, but stakeholders note divergent views on implementation speed and enforcement [S45][S64][S58].
Overall Assessment

The panel largely concurs that a universal AI governance consensus is unattainable and that coalition‑building is essential. However, substantive disagreements emerge around the sequencing of innovation versus regulatory frameworks, the preferred technical mechanism for shared AI infrastructure (open‑source versus formal standards), and the primary barrier to AI adoption (compute access versus coalition‑driven governance). An unexpected tension appears between audience expectations for rapid transparency and the panel’s limited engagement with that demand.

Moderate to high: while there is consensus on the need for cooperation, the differing views on how to operationalize capacity building, infrastructure sharing, and regulatory sequencing could impede coordinated action, especially for emerging economies seeking concrete pathways.

Partial Agreements
Both agree that full global AI governance is unattainable at present and that building coalitions around specific issues is the most realistic way forward; on this core point there is no real dispute between them [26-29][48-50].
Speakers: Bella Wilkinson, Sabina Chofu
Global consensus on AI governance is unrealistic; focus on partial alignment via issue‑specific coalitions.
Coalition building is the most pragmatic path given current geopolitical tensions.
Both aim to increase AI capacity in developing regions through shared resources, but Rajesh focuses on institutional consortia and cloud credits, whereas Rafik emphasizes open‑source software stacks and data trusts as the sharing mechanism [130-133][70-78].
Speakers: Rajesh Nambia, Rafik Rikorian
Public‑private compute consortia and cloud‑credit programs can pool resources and give broader access to AI compute.
An open‑source stack (Linux analogy) can provide shared AI infrastructure while preserving national sovereignty.
Takeaways
Key takeaways
Global consensus on AI governance is unrealistic in the current geopolitical climate; focus should shift to issue‑specific coalitions and partial alignment.
Multilateral institutions can act as brokers, but trusted mechanisms and coalition‑building are needed to bring rival states together.
Developing nations face a severe AI divide driven by limited compute access, data silos, poor data quality, and inadequate infrastructure (power, connectivity).
Open‑source models (the Linux analogy) and shared technical standards can provide common infrastructure while preserving national sovereignty.
Examples such as the Southeast Asian Languages Under One Network illustrate how open‑source LLMs can be locally fine‑tuned for language and cultural relevance.
Data trusts and federated‑learning architectures are promising models for cross‑border collaboration that respect data provenance and privacy.
Public‑private compute consortia, cloud‑credit programs, and shared GPU resources can help pool scarce compute capacity.
Capacity building must go beyond workshops to include sharing of evidence, benchmarks, procurement‑policy frameworks, and open‑source adoption guidance.
Emerging economies should adopt an innovation‑first approach and develop sector‑specific governance (healthcare, finance, etc.) rather than relying solely on horizontal regulation.
Evolving technical standards bodies (NIST, ISO, ITU) offer flexible, international frameworks that can adapt to rapid AI advances.
Resolutions and action items
Proposal to form issue‑specific coalitions (e.g., verification, hardware risk mitigation, anonymised usage data) that can later be scaled through multilateral formats.
Suggestion to create or expand public‑private compute consortia and cloud‑credit schemes to provide shared GPU resources for developing countries.
Call for the development of open standards and open interfaces for AI models to enable a LAMP‑like stack for AI.
Recommendation to establish data‑trust marketplaces (e.g., Mozilla Data Collaborative) that ensure provenance, licensing, and fair compensation for data contributors.
Encouragement to adopt federated‑learning approaches for cross‑border model training without exposing raw data.
Action item to share evidence, performance benchmarks, and best‑practice documentation internationally to build technical capacity in emerging economies.
Suggestion to coordinate procurement‑policy networks across countries to streamline acquisition of open‑source AI solutions.
Unresolved issues
How to create trusted, neutral mechanisms that can reliably bring rival states (e.g., US, China) into the same governance discussions.
Funding models and governance structures for large‑scale compute pooling and cloud‑credit distribution.
Specific pathways for scaling open‑source AI models while ensuring they meet diverse regulatory and cultural requirements.
Details of implementing federated‑learning frameworks across jurisdictions with differing data‑privacy laws.
Concrete steps for building and retaining AI talent (both technical and governance) in smaller economies.
How sector‑specific governance frameworks will be coordinated internationally to avoid fragmentation.
Mechanisms for aligning and updating technical standards (NIST, ISO) in a timely manner as AI capabilities evolve.
Suggested compromises
Partial alignment on priority issue areas rather than full global consensus.
Coalition building around trusted, limited‑scope mechanisms that can later be scaled via multilateral institutions.
Adopting open standards that allow shared core infrastructure while permitting national fine‑tuning for sovereignty.
Pooling compute resources and cloud credits while allowing individual countries to retain control over their own workloads.
Balancing open‑source contributions with local adaptation to meet cultural and regulatory needs.
Thought Provoking Comments
Global consensus on how to govern AI is a no‑go. However, partial alignment on priority issue areas is possible, and we should focus on building coalitions that can later be scaled through multilateral formats.
She challenges the optimistic narrative of universal AI governance, reframing the problem from seeking impossible global consensus to pragmatic coalition‑building, which sets a realistic tone for the discussion.
Her comment shifted the conversation from abstract geopolitics to concrete, actionable steps. It prompted Sabina to acknowledge coalition‑building as the best hope, and opened space for other panelists to propose specific mechanisms (e.g., open‑source models, compute consortia).
Speaker: Bella Wilkinson
The AI divide will be much bigger than the digital divide because it is about agency, not just access. Compute, data quality, power, connectivity and skills are layered barriers that disproportionately disadvantage smaller economies.
He expands the discussion from high‑level governance to the concrete, multi‑dimensional infrastructure gaps that developing countries face, highlighting why mere access to the internet is insufficient for AI participation.
His detailed enumeration of barriers deepened the analysis and gave the panel a concrete problem set to address. It led to follow‑up suggestions from Rafik about shared infrastructure and from Halak about standards and shared practices.
Speaker: Rajesh Nambia
Every computer runs Linux; the Linux model shows how a common code base can be contributed to by anyone while allowing sovereign fine‑tuning. We need an equivalent ‘LAMP‑stack’ for AI – open standards and interfaces that let each country build on a shared core.
He introduces a powerful analogy from open‑source software to AI governance, proposing a concrete architectural vision for collaborative yet sovereign AI development.
This analogy sparked a thematic thread on open‑source and modularity that recurred throughout the panel. It inspired Bella to cite the Southeast Asian multilingual LLM example and prompted Rafik later to discuss data trusts and federated learning as practical implementations.
Speaker: Rafik Rikorian
Technical standards (e.g., NIST, ISO) are flexible, evolving, and can prevent smaller companies from being priced out. International, evolving standards combined with industry coalitions can enable shared risk‑mitigation practices.
Halak identifies a tangible lever—standardisation—that can bridge the gap between diverse regulatory regimes and foster inclusive participation, moving the conversation from abstract governance to actionable policy tools.
Halak's focus on standards gave the panel a concrete area of convergence, leading Sabina to link it back to Bella's coalition idea and prompting further discussion on interoperability and shared resources.
Speaker: Halak Shirastava
Mozilla’s Data Collaborative aims to create a marketplace of ethically sourced, provenance‑tracked data sets, giving data owners (e.g., radio stations) compensation and control, while providing clean data for model training.
He presents a concrete, innovative model for data sharing that addresses both ethical concerns and the data scarcity faced by developing regions, illustrating how open‑source principles can be operationalised.
This example grounded the earlier abstract talk of data trusts, leading Bella to reference it when discussing multilingual LLMs and prompting further interest in federated learning as a complementary approach.
Speaker: Rafik Rikorian
Capacity building isn’t just workshops; it requires shared evidence, procurement policy coalitions, and open‑source adoption to avoid billions of dollars wasted on proprietary models.
Halak expands the notion of capacity building beyond training, highlighting systemic levers (evidence sharing, procurement) that can accelerate adoption in emerging economies.
Halak's points redirected the conversation toward practical mechanisms for scaling AI in low‑resource settings, reinforcing Rajesh's earlier emphasis on innovation‑first approaches and influencing the final round of discussion about talent and sector‑specific governance.
Speaker: Halak Shirastava
Countries should lead with an innovation‑first mindset; sector‑specific (healthcare, finance) governance is more meaningful than blanket horizontal rules, and we need talent that understands both technology and sectoral harms.
He challenges the typical regulatory‑first narrative, arguing for a nuanced, sector‑focused approach that balances innovation with safety, adding depth to the policy discussion.
This comment prompted Sabina to acknowledge the need for sector‑specific solutions and led to a brief but pointed exchange on talent gaps, reinforcing the panel’s consensus on the importance of building local expertise.
Speaker: Rajesh Nambia
Overall Assessment

The discussion pivoted from an initial, high‑level framing of AI governance to a grounded, solution‑oriented dialogue thanks to a handful of incisive remarks. Bella’s realistic appraisal of global consensus set a pragmatic baseline, while Rajesh’s exposition of the multi‑layered AI divide supplied the concrete challenges that needed addressing. Rafik’s open‑source analogies and data‑collaborative proposal, together with Halak’s focus on evolving technical standards and systemic capacity‑building, supplied actionable pathways for coalition‑building and shared infrastructure. Subsequent comments on innovation‑first, sectoral governance, and talent development deepened the conversation, steering it toward implementable policies for emerging economies. Collectively, these key comments reshaped the tone from speculative to constructive, aligning the panel around tangible mechanisms—open standards, data trusts, federated learning, and procurement coalitions—to bridge the AI divide.

Follow-up Questions
What messaging can drive coalition building in AI governance in the absence of trusted institutions and shared values?
Identifying effective communication strategies is crucial to foster trust and cooperation among competing nations and stakeholders.
Speaker: Bella Wilkinson
How can low‑hanging fruit in governance alignment (e.g., shared data governance, pooled compute) be operationalised for resource‑constrained countries?
Practical steps are needed for developing nations to benefit from coalition‑building without excessive cost or complexity.
Speaker: Bella Wilkinson
What concrete examples of shared standards, pooled resources, or public‑private models exist that could be replicated for smaller or developing economies?
Real‑world models would guide policy makers and practitioners in implementing collaborative AI initiatives.
Speaker: Rajesh Nambia
How can an open‑source, LAMP‑style stack be translated into AI to provide digital sovereignty, interoperability, and flexibility for nations?
Open‑source approaches could democratise AI infrastructure, allowing countries to customize while contributing to a common core.
Speaker: Rafik Rikorian
How can technical standards such as NIST and ISO be aligned across jurisdictions to reduce compliance costs for startups and smaller firms?
Harmonised standards would lower barriers to market entry and promote equitable participation in AI development.
Speaker: Halak Shirastava
How can shared risk‑mitigation practices (e.g., misuse evaluations, red‑team reports) be coordinated internationally?
Collective safety assessments can improve trust, reduce duplication of effort, and enhance global AI security.
Speaker: Halak Shirastava
How can interoperability of shared resources (datasets, benchmarks, evaluation tools) be achieved across large tech companies and startups?
Interoperability enables broader participation, fair competition, and faster progress in AI research and deployment.
Speaker: Halak Shirastava
How can federated learning architectures be leveraged for cross‑border collaboration while preserving data sovereignty?
Federated learning allows joint model training without moving raw data, addressing privacy and sovereignty concerns.
Speaker: Rafik Rikorian
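Federated learning comes up repeatedly in the session as a way to train models jointly without moving raw data across borders. As an illustration only, the following minimal federated-averaging (FedAvg) sketch uses synthetic data and hypothetical "client" jurisdictions; it is not drawn from the panel and simply shows the mechanism the speakers allude to: each participant trains locally, and only model weights cross the boundary to be averaged.

```python
# Minimal FedAvg sketch (illustrative, synthetic data): each "client"
# trains a linear model on its own private data; only weight vectors
# are shared and averaged, never the raw data.
import numpy as np

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])  # ground-truth weights for the synthetic task

def make_local_data(n=200):
    X = rng.normal(size=(n, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=n)
    return X, y

def local_update(w, X, y, lr=0.1, steps=20):
    # Plain gradient descent on mean-squared error, run entirely locally.
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

clients = [make_local_data() for _ in range(5)]  # five hypothetical jurisdictions
w_global = np.zeros(2)
for _ in range(10):
    # Each client refines the current global model on its private data...
    local_ws = [local_update(w_global.copy(), X, y) for X, y in clients]
    # ...and only the weights are shared and averaged into the new global model.
    w_global = np.mean(local_ws, axis=0)

print(np.round(w_global, 2))  # close to true_w; raw data never left the clients
```

The design point, relevant to the sovereignty concerns raised in the session, is that the coordinating party never sees `X` or `y`, only the weight vectors.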
What models of data trusts (e.g., indigenous data collectives, Mozilla Data Collaborative) can be scaled globally for ethical data sharing and compensation?
Data trusts provide provenance, licensing, and monetisation mechanisms essential for fair and responsible AI data use.
Speaker: Rafik Rikorian
What procurement policy frameworks could be established through an industry coalition to open markets for emerging economies?
Standardised procurement rules can facilitate adoption of open‑source AI solutions and stimulate local ecosystems.
Speaker: Halak Shirastava
How should capacity‑building be structured beyond workshops—e.g., through shared evidence, documentation, benchmarks—to effectively uplift emerging economies?
Tangible resources and shared knowledge are needed for sustainable capacity development in AI governance.
Speaker: Halak Shirastava
How can sector‑specific governance (healthcare, finance, climate, etc.) be developed to address distinct harms and regulatory needs?
Sectoral approaches may be more effective than generic rules, ensuring relevant safeguards for each domain.
Speaker: Rajesh Nambia
What strategies can address talent gaps in AI governance within governments of developing countries?
Building skilled personnel is essential for implementing and regulating AI responsibly at the national level.
Speaker: Rajesh Nambia
What are the implications and lessons from the Southeast Asian Languages Under One Network multilingual LLM model for collaborative AI development and governance?
The model illustrates how open‑source fine‑tuning and cross‑border collaboration can produce culturally relevant AI services.
Speaker: Bella Wilkinson (addressed to Rafik Rikorian)
How can cross‑border cooperation be facilitated given institutional capacity constraints in developing nations?
Institutional capacity is a bottleneck; identifying mechanisms to strengthen it is key for effective AI adoption.
Speaker: Sabina Chofu (to Bella)
What developments in AI governance standards and bodies (ITU, ISO, etc.) are expected over the next 12‑18 months?
Anticipating near‑term progress helps stakeholders plan actions and align with emerging frameworks.
Speaker: Halak Shirastava
Is the current pace of transparency and accountability (e.g., 30‑year lag for certain files) acceptable, or are we resigned to systemic delays?
Raises concern about the speed of governance processes and the need for more timely accountability mechanisms.
Speaker: Audience

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.