Smart Regulation: Rightsizing Governance for the AI Revolution

Session at a glance: Summary, key points, and speakers overview

Summary

The panel opened by framing the summit’s focus on “governance for an AI-driven world” and the need to give all nations access to AI resources through shared compute and data initiatives [1-5][10-14][15].


Bella Wilkinson argued that a universal AI-governance consensus is unattainable in the current geopolitical climate, but partial alignment on priority issues can be achieved by building coalitions that emphasize sovereignty and strategic autonomy, especially for resource-constrained countries that might pool compute resources [26-28][38-44][43].


Rajesh Nambia highlighted that the emerging “AI divide” will far exceed the previous digital divide, pointing to limited access to high-performance compute, high costs, fragmented and low-quality data, and broader infrastructure gaps such as power and connectivity [56-60][61-66]; he suggested public-private compute consortia, shared GPU clusters and cloud-credit schemes as practical ways for developing economies to participate [132-133].


Rafik Rikorian proposed an open-source model as a template for AI collaboration, likening the universal Linux code base and the LAMP stack to a shared infrastructure that can be locally fine-tuned while preserving digital sovereignty; he called for open standards and interfaces to prevent a handful of frontier-model firms from monopolising AI governance [68-78][84-89][90-96].


Halak Shirastava reinforced the promise of technical standards (e.g., NIST, ISO) and shared risk-mitigation practices, stressing the importance of shared evidence, coordinated procurement policies and interoperability of resources to build capacity in emerging economies; she expressed optimism that increased stakeholder participation will drive measurable progress within the next year [102-108][110-115][188-196][218-224].


Overall, the discussion converged on the view that while global AI-governance consensus is unlikely, targeted coalitions, open-source-inspired frameworks, and shared standards can enable meaningful cooperation and capacity-building for smaller and developing nations.


Key points


Major discussion points


Global AI governance is unlikely to achieve full consensus, but targeted coalitions and partial alignment are feasible.


Bella notes that “global consensus on how to govern AI is a no-go” in the current geopolitical climate, yet “partial alignment on priority issue areas is possible” and can be built through smaller coalitions that later scale via multilateral formats [26-29][36-40][42-44].


Developing and smaller economies face a multi-layered “AI divide” that goes beyond the traditional digital gap.


Rajesh highlights three core barriers: limited and expensive compute resources; fragmented, low-quality data silos; and foundational infrastructure deficits such as power and connectivity, all of which compound talent shortages [57-63][68-71][73-76].


Open-source models and shared software infrastructure can provide a pathway to digital sovereignty and collaborative AI development.


Rafik draws an analogy to the Linux/LAMP stack, arguing that a common open-source core with locally-fine-tuned layers would let every nation retain sovereignty while contributing to a shared ecosystem [68-78][80-88][90-96].


Technical standards, shared risk-mitigation practices, and interoperability are key levers for scaling governance and enabling smaller players.


Halak points to evolving frameworks such as NIST and ISO, the need for shared evaluation documents, and the importance of interoperable resources (e.g., red-team reports, multilingual benchmarks) to avoid “price-out” effects for startups [102-108][110-115][118-124].


Capacity-building must go beyond workshops to include shared evidence, procurement policy coalitions, and sector-specific governance mechanisms.


Both Halak and Rajesh stress that emerging economies need concrete tools: shared performance benchmarks, cross-border procurement networks, and sector-focused regulatory approaches (e.g., health-care vs. finance) to develop the talent and policies required for responsible AI [184-191][192-199][213-215][219-224].


Overall purpose / goal of the discussion


The panel was convened to explore how the international community can “up-level the playing field” for smaller and developing nations by sharing compute, data, and governance resources, and by identifying practical mechanisms (coalitions, open-source models, standards, and capacity-building) that can foster equitable AI development across sectors such as health, education, and climate resilience [2-5][18-21].


Overall tone and its evolution


– The conversation opens with a pragmatic, somewhat pessimistic tone about the feasibility of worldwide AI governance consensus [26-28].


– It quickly shifts to constructive optimism, emphasizing coalition-building, open-source collaboration, and concrete standards as achievable pathways [40-44][68-78][102-108].


– By the latter half, the tone becomes forward-looking and hopeful, with speakers highlighting imminent progress in standards, capacity-building, and sector-specific governance over the next 12-18 months [211-224][218-224].


Thus, the discussion moves from acknowledging geopolitical constraints to outlining actionable, collaborative solutions that inspire confidence in the near-term future.


Speakers

Sabina Chofu


Areas of expertise: International AI policy, governance, multilateral cooperation


Role/Title: International Policy and Strategy Lead at TechUK (sister association of NASCOM in the UK)


Affiliation: TechUK


Bella Wilkinson


Areas of expertise: Digital society, AI governance, coalition building


Role/Title: Research Fellow, Digital Society Program


Affiliation: Chatham House


Rafik Rikorian


Areas of expertise: Open-source technology, shared AI infrastructure, standards


Role/Title: Chief Technology Officer


Affiliation: Mozilla


Rajesh Nambia


Areas of expertise: AI adoption in emerging economies, compute & data infrastructure, public-private partnerships


Role/Title: President


Affiliation: NASCOM (National Association of Software and Service Companies, India) [S1]


Halak Shirastava


Areas of expertise: Global AI policy, technical standards, interoperability, capacity building


Role/Title: Global Public Policy Lead (AI)


Affiliation: Cohere (Canadian AI developer) [S2]


Audience


Areas of expertise:


Role/Title: Audience member(s)


Affiliation:


Additional speakers:


Navreena Singh – Mentioned as absent; affiliated with Credo AI.


Full session report: Comprehensive analysis and detailed insights

The session opened with Sabina Chofu, International Policy and Strategy Lead at TechUK, who noted that Navreena Singh could not attend because of a meeting with the president and positioned the summit under the theme “governance for an AI-driven world” [2-4][9-11]. She also reminded the audience that TechUK is the sister association of NASCOM in the UK [15-17].


Bella Wilkinson, research fellow on the Digital Society Programme at Chatham House, set a realistic tone by stating that a universal AI-governance consensus is currently a “no-go” in the geopolitical climate [26-28]. She argued that, while full alignment is unattainable, partial alignment on priority issues can be achieved through issue-specific coalitions that may later scale via multilateral formats [12-15]. Wilkinson highlighted the accelerating US-China AI race, the opacity of frontier models, and the erosion of trust in international institutions, and suggested that coalition-building should be framed around “sovereignty and strategic autonomy” for resource-constrained countries [34-37][38-41].


Rajesh Nambia, President of NASCOM India, described the emerging “AI divide” as larger than the earlier digital divide because it concerns both agency and access [56-60]. He identified three inter-linked barriers for emerging economies: (1) severe scarcity and high cost of high-performance compute, even after adjusting for purchasing-power parity [57-60]; (2) fragmented, low-quality data silos across government departments that impede the creation of representative models [61-66]; and (3) foundational infrastructure gaps, including unreliable power, limited clean energy, and insufficient connectivity, that further hinder AI deployment [68-71][73-76]. Nambia cited public-private compute consortia, shared GPU clusters such as India’s AI Mission compute cluster, and cloud-credit schemes from hyperscalers as ways to provide resources without each country having to build a frontier model from scratch [130-133]. He also warned that talent gaps in both AI development and regulatory expertise threaten effective governance [213-215].


Rafik Rikorian, Chief Technology Officer of Mozilla, drew a parallel with the Linux ecosystem, noting that “every computer on the planet runs Linux” and that this model allows anyone to contribute to a common code base while retaining the freedom to fine-tune their own implementations [70-78]. He extended the analogy to the early web, illustrating how the shift to the LAMP stack introduced openness that allowed anyone to build services without needing permission [80-86][87-96]. Applying this to AI, Rikorian described Mozilla’s “Data Collaborative”, a marketplace for ethically sourced, provenance-tracked datasets that compensates data owners (e.g., radio stations) and supplies clean data for model training [157-166]. He also referenced an indigenous data-trust model for Hawaiian genomic data and advocated federated-learning architectures, where model training occurs on local devices and only model weights are shared, preserving data sovereignty while enabling cross-border collaboration on health, language, or other sector-specific models [167-176].


Halak Shirastava, Global AI and Public Policy Lead at Cohere, emphasized the role of evolving technical standards such as NIST and ISO, describing them as “flexible and evolving” frameworks that can avoid “price-out” effects for startups [102-108]. She highlighted shared risk-mitigation practices (joint misuse evaluations, red-team reports, and interoperable multilingual benchmarks) as essential for scaling governance across large tech firms and smaller players [110-115][118-124]. Shirastava then outlined a three-step capacity-building framework: (a) sharing documented evidence and performance benchmarks; (b) establishing coordinated procurement-policy networks to avoid costly country-by-country compliance; and (c) promoting open-source adoption to prevent billions of dollars of waste on proprietary solutions [183-191][188-196].


An audience member raised a point about the “30 years for FC files” and a lingering concern about the slow pace of systemic reforms; Sabina acknowledged, with some confusion, that the point had not been directly addressed [135-138][140-144].


Returning to coalition-building, Bella highlighted the “Southeast Asian Languages Under One Network”, a multilingual LLM that combines open-source model inputs with local fine-tuning, illustrating how open-source assets can be adapted to regional contexts while supporting robust national institutions and cross-border cooperation [151-155]. Rikorian expanded on this by reiterating the potential of the Mozilla Data Collaborative and federated-learning architectures, and Shirastava reinforced the importance of the three-step capacity-building framework. Rajesh concluded by urging an “innovation-first” mindset, recommending pilot projects and sector-specific governance (e.g., health-care versus finance) before imposing heavy regulation [213-215].


In closing, Sabina summarized the panel’s consensus: (i) targeted, issue-specific coalitions are the most pragmatic route to partial governance alignment; (ii) open-source-inspired infrastructures and open standards can provide shared foundations while preserving national sovereignty; (iii) technical standards (NIST, ISO) and shared risk-mitigation practices are vital for inclusive participation; and (iv) capacity-building must move beyond ad-hoc workshops to systematic sharing of evidence, benchmarks, and procurement frameworks [26-29][36-40][68-78][102-108][184-191]. Shirastava projected that increased stakeholder participation over the next year will accelerate standards development, raise AI literacy across public and private sectors, and deliver concrete capacity-building outcomes [218-224]. Rikorian echoed this optimism, noting that federated-learning and data-trust models are already maturing and could be deployed at scale within the coming months [176].


Notable disagreements were recorded. Nambia emphasized compute access as the primary barrier and advocated an innovation-first approach, whereas Wilkinson placed greater weight on coalition-driven governance mechanisms rather than direct compute provision [57-60][26-29][38-44]. Rikorian’s vision of an open-source stack contrasted with Shirastava’s focus on formal standards bodies, reflecting a tension between community-driven and standards-driven pathways [70-78][102-108]. Finally, Nambia’s “innovation-first” stance conflicted with Shirastava’s claim that early adoption of flexible standards and coordinated procurement policies is essential to avoid costly regulatory fragmentation [213-215][102-108].


Overall, the panel agreed that while a single global AI-governance regime is unlikely, the combination of targeted coalitions, open-source-style shared infrastructures, evolving technical standards, and robust capacity-building programmes offers a viable roadmap for narrowing the AI divide and empowering smaller and developing nations to participate meaningfully in an AI-driven future.


Session transcript: Complete transcript of the session
Sabina Chofu

about this morning is right-sizing governance for an AI-driven world. So what we’ll try to do with a pretty excellent panel, as I’m sure you’ll agree, is talk a bit about shared compute and data initiatives that hopefully give all nations access to AI resources. We’ll look a bit at how to up-level the playing field for smaller and developing nations. And we’ll talk about collaboration in key sectors like healthcare and education and climate resilience. I’ve got a perfect panel to do that with. I’m going to introduce them all first, and then we’ll dive straight into the conversation. So unfortunately, Navreena Singh from Credo AI couldn’t be with us this morning. She’s got a meeting with the president, so she’s excused.

But we do have… I’ll start with, just next to me here, Bella Wilkinson, who’s a research fellow on the Digital Society Program with Chatham House. Next to her is Rafik Rikorian, I hope I’ve pronounced that vaguely okay, who is the Chief Technology Officer for Mozilla. Next to him, we’ve got Rajesh Nambia, who is the President of NASCOM, our sister association here in India. And last but not least, we’ve got Halak Shirastava, who manages Global AI and Public Policy and Regulatory Affairs at Cohere. And for those of you who don’t know me, I’m Sabina Chofu, I’m International Policy and Strategy Lead at TechUK. So we are the sister association of NASCOM back in the UK.

So without further ado, we will start with setting a bit of a global context, and who better to do that than Isabella. So from a kind of geopolitical perspective, how realistic, I guess, is alignment on AI governance across countries with… fair to say very different strategic interests right now. And where do you see maybe multilateral institutions? I know multilateralism is not a very popular theme these days, but where do you see multilateral institutions or maybe other international players playing a role in this space? So over to you.

Bella Wilkinson

Thank you, Sabina. Thanks to my fellow speakers. It’s great to be here today, really keeping the energy up on the final day of the summit. We can all do it. Let me answer your question directly and then perhaps elaborate a little bit more in detail. Global consensus on how to govern AI is a no-go. It is not going to happen in this geopolitical environment. However, partial alignment on priority issue areas is possible, and it’s pragmatic to throw our weight behind these smaller gatherings that we can then scale using the multilateral format. Now, let’s take a second. Let’s take a second to sketch out the state of play. We have some great experts in the room, on the panel, so I won’t spend too long doing this.

We have been absolutely covered in really optimistic summit rhetoric, walking into Bharat Mandapam, going to side events over the course of this week. But despite the optimism, outside of these walls, in the background, the US-China AI race continues to accelerate to the umpteenth degree. The capabilities of advanced and the most frontier AI systems and models, the little we know about their capabilities, mind, with huge gaps in transparency, continue to advance. And global scientists only recently have issued warnings about the state of the science and the intense uncertainty surrounding these capabilities and the impact they might be having on our communities and societies. Well, it’s a good thing we have strong international institutions and shared values. We don’t. You know, it’s a really difficult time for global cooperation outside of AI. We’re seeing, I would argue, since the Second World War, an unprecedented degradation of the international organizations, the shared values, the rule of law that we have all held so dearly. So suffice to say, it’s a difficult time for global governance, and it’s a difficult time for the global governance of AI. Now, institutions in the past have very much been brokers, mediators and scalers of consensus on tricky governance issues, and some of the governance problems we’re facing today are pretty old, right? I mean, I’ve encountered them in previous roles at Chatham House and other areas of tech, and I’m sure the experts on our panel have come across them. And the core governance puzzle that we need to figure out is this: taking into account the state of geopolitics, the uncertainty around the state of the science, the market dynamics mediated by these leading labs, and the intensely, intensely competitive US and Chinese AI race dynamics, how on earth do we bring rivals and competitors around the same table?

How do we bring states with a nominal or a minimal alignment of interests and incentives into the same room? Now, you started by asking me about multilateralism and institutions, but maybe let’s reframe this and talk about coalitions. In other areas of governance, what we’ve seen is intense coalition building in crisis or unstable settings around a trusted mechanism, a trusted approach, perhaps in the absence of shared values and principles. And what I’m really interested in, in the context of AI, is where coalition building can develop trust around a credible governance approach, adopt a state champion, get support from associations, from builders, from leading labs themselves, and then scale it using the multilateral format. And over the past few days, I’ve been really excited by some of this splintering-to-scale dynamics that I’ve seen maybe in conversations on verification, on-chip hardware, risk mitigation strategies, even anonymized collection of usage data, which came out of the commitments yesterday.

Now, what’s the messaging that can drive this coalition building in the absence of trusted institutions, in the absence of shared values? I’ll get into this later in my remarks, but I think it has to be sovereignty and strategic autonomy. Resource-constrained countries who might decide to adopt a common data governance approach, who might decide to pool resources like compute, have to also consider a degree of governance alignment, again, at this low-hanging fruit, in order to not only withstand the dynamics of the AI race, but to ensure that the collective benefits of cooperation and governance alignment massively outweigh anything they could do individually. So I think I’ll leave it there. Slightly pessimistic take. Let’s see if there’s some more optimism on the

Sabina Chofu

Thank you so much, Bella. I don’t think it was that pessimistic. You did kind of, I think you made it sound very pragmatic in terms of, look, the world is not what we want it to be, and there isn’t the level of multilateral cooperation that we maybe used to have. But you have talked about coalition building, and it’s probably the best we can hope for in the world as it is, as opposed to the world as we’d like it right now. And Rajesh, can I turn to you next? For emerging economies, obviously access to compute, data and infrastructure is critical, but what do you see as some of the most pressing barriers, but also maybe opportunities, for AI adoption in India and beyond?

Over to you.

Rajesh Nambia

First of all, thank you for having me on the panel. Pleased to be with all of you, and then a few of you showed up here as well, so thank you for coming up. We wish this was the Modi inauguration last evening, which drew a little bit more than this crowd, but nevertheless, we’ll do with this. But you know, I believe we used to talk about the digital divide for a long period of time, and while that had its own puts and takes, when you compare a smaller economy and smaller country with a larger one and so on, I think the AI divide is going to be much, much bigger than the digital divide which we saw. The biggest difference is that the digital divide was at least about, you know, access and so on, whereas this is all about agency, and it can completely put you on a different back foot. So it is such an important topic to talk about when you talk about the broader, you know, haves and have-nots and what really goes on with the larger and smaller economies and so on. And I truly believe that the accessibility, when you look at the broader scales, will come across multiple things, starting with compute, one of the largest, you know, pieces of what we are talking about here, right? I think, as you mentioned, in terms of the race between the US and China and so on and so forth, if you leave those two countries, then of course we have a big drop in terms of where the real access is going to be. And I believe totally that, you know, the continued limited access to the broader compute facility is going to be unduly putting some of these smaller countries, especially the developing ones, at a little bit of a disadvantage.

So, I think there’s a lot that can be done around it in terms of saying, you know, what is that, you know, countries can potentially do in terms of pooling and so on. But I think there is certainly an issue when it comes to compute. And, you know, not just in terms of accessibility, but also in terms of expense and so on, because at the end of the day, all of these are, even if you use the purchasing power parity, and then sort of look at what it costs for people to sort of get into the kind of level of GPUs, potentially, or GPU clusters one has to produce to even have a meaningful language model and so on.

I think that’s going to be a very different ballgame. And the second element of this whole broader issue that we’re talking about is also the data, and then the organization of data, availability of data, quality of data, and so on. I think the more you get into the developing world, you will find that the data itself is very siloed in many ways. There are, you know, different state silos, different department silos, and so on, and it gets to a point where the data, which is such an important and integral part of everything to do with AI, the data which gets fed into the broader models and eventually the AI systems, will necessarily not have the right representation of that population, which is a huge concern. I mean, of course, India is slightly luckier in many ways in terms of us, you know, playing that game a little bit, you know, punching a little bit above our weight in some sense. But when you go down the list of countries which do not have access to all of these, I think you’re going to find it even harder in terms of solving the data issue, and the data availability, data quality, all of that becomes a bigger issue. And when we talked about the infrastructure gap, the compute gap, it’s a little bit more than just the pure compute itself, GPUs and so on; it’s also about connectivity, power. These are the issues which, you know, we somehow take for granted in other segments, but I think you will find that power is going to be a huge foundation for all of that. And as you know, there are multiple layers in building any of the AI systems, and one of the bottom-most layers is going to be power, and then, you know, what really happens to the power? And if it has to be clean power, then, you know, does it put an additional tax on the developing world for making sure that that power comes out clean? Connectivity is a huge issue. Even though it’s kind of broadly solved in some sense with all the satellite options and so on, the kind of connectivity you need to run a truly inclusive AI system is going to be very different from what, you know, people have thought otherwise. And then of course we can go on and on in terms of the other layers, and then there is the skills issue, the availability of skills, and ensuring that you have the right skills not just to leverage AI but also to build AI. I mean, there are two different types of capabilities that you need to produce in any country. So these are the issues. And the opportunity itself would be to sort of look at this and say, are there other ways of collaborating, other ways of partnering, and so on? Because, you know, especially when you go down the list of countries, we have close to 200 countries or so in the world, and when you leave the top 5 or 10 and then you go below and keep going down the list, it becomes harder. I mean, I don’t think that everybody is going to be producing a full-blown large language model and the things that they need to do it for themselves. At that point in time, the question will be: can you really partner, can you really leverage some of the common systems that can be done across these countries, and so on?

Sabina Chofu

Thank you. I mean, you’ve done a brilliant job of laying out all three problems we’ve got and then saying you’ve got a long list afterwards in terms of cooperation. But I love the touch of optimism there at the end. It’s like, you know, if you lift a few countries out of the room, you still have a hundred and whatever, 185, that need to figure it out. So I liked a lot of that framing. And thanks for touching on

Rafik Rikorian

I mean, unsurprisingly, being someone from Mozilla, I’ll probably go with the open source angle as one of the opportunities to actually align the talent, align the capabilities, and actually do shared infrastructure. I mean, maybe I’ll draw two analogies to think about, and then we can go more deep into those as it applies to AI. But for all practical purposes, every computer on the planet runs Linux. There are a few iPhones here and there on top of it. But the Linux model, I think, is a good one for all of us to think about, that every computer… Every country, every nation in the world, almost every company in the world, contributes to the single code base which has been deployed across these billions of computing devices across the planet.

And there are lots of derivative work that happens from it. So a company like Google can then take that and make it into Android. A vending machine company can deploy Linux onto a Raspberry Pi and run it inside their vending machine. So I think there’s an analogy here of being able to use shared infrastructure, shared software infrastructure, as a collaboration mechanism where we can all pool resources together but still have sovereignty on top of it. So we can still all be contributing to this common core but then fine-tune our way to our own particular implementations. And I think we can take that and then marry it with a web analogy: in the early 90s, with the original web, you needed to ask for permission in order to deploy a website.

And by permission I mean effectively you had to go buy yourself a Solaris box, or you had to buy yourself a Windows NT server, and you’re trying to configure an ActiveX scenario. And the beauty of what Mozilla and Firefox did, we’re not the only ones who did it, but the beauty of what they did there is a forced openness throughout the stack that enabled anyone, without permission, to build whatever they wanted. And I think we need to find a similar moment. So in that world, we went from the Windows NT stack and all of IIS to the LAMP stack. And the LAMP stack has these gorgeous analogies of just like anyone can build on Linux.

When Facebook needed PHP to move faster, they did massive improvements on PHP, which then trickled down to all of us. So people can contribute in different ways across it. That’s not the world we’re currently living in with AI. We’re living in this world where there are a few frontier model companies that are effectively doing governance for all of us in some way, shape, or form. And I agree with my colleague that that’s an untenable situation. I do live in San Francisco, but you don’t want four people in San Francisco making governance decisions for the entire world. That doesn’t make a lot of sense. So I do think if we can find the LAMP stack equivalent model for AI, and this is actually what I’ve turned all of Mozilla towards, of just like how do we define open standards, how do we define open interfaces, so that the vibrancy of the open source community can come together and actually build solutions that work for every single person, every single community, every single government on the planet.

You can sort of build upon, you can contribute to the common base, but then build upon it and take it in a way that makes it more aligned with your country’s values or your company’s values or your individual values, and you can fine-tune your solution out of that. So I think there is an analogy here around how open source could actually provide digital sovereignty across all the different levels. Give us agency as a person, give opportunities for flexibility at a corporation level, and then give countries the ability to own their version of the stack. That could actually be quite beautiful if we can actually figure out how to do that in an appropriate way.

Sabina Chofu

I tried to give you a dose of optimism; you have given me a dose of optimism. But I’m absolutely shocked you talked about open source! Thanks so much, Rafik. And I did appreciate you brought up the standards, because I’m going to talk to Halak, and we’re going to go a bit into collaboration and standards here. So obviously, with the myriad of AI governance frameworks, I’m going to turn to you on the question of where do you see potential for alignment on standards, maybe some interoperability, maybe some risk management frameworks. So keep us on the hopeful path, please.

Halak Shirastava

I am here to provide the hopeful perspective. Let me start out by saying that I lead global public policy at Cohere. Cohere is a Canadian AI developer; we build models, and we have agentic AI. Our solution is called North. So in my role, I look across the global regulatory framework. That means if our startup wants to, you know, do business in a certain country, I try to understand the regulatory landscape of that country, and then I advise our company if it’s favorable or not. When we’re talking about governance and frameworks that are existing, my perspective is, I think it’s not there yet, but I have a more promising view of it. I think that in certain principles, we are converging to where we need to go, and there are strong opportunities.

Technical standards are one of them. There are frameworks like the NIST and ISO frameworks. For startups, these are key, because they’re flexible and they’re evolving. If we just go country by country, that’s going to price out smaller companies. But an international framework that is evolving and flexible, and that includes the industry coalitions a lot of the model developers are part of, but where other stakeholders can participate as well, really helps. The second thing I would say is shared practices around risk mitigation. I think there’s strong opportunity there as we come together and share documents or evaluations around misuse, model capabilities, or the impact of models.

Like I said, we have a way to go, but we are moving closer to that. And then the third thing I would say is interoperability of shared resources. This is key, key, key. We have a big ecosystem: yes, big tech is involved, but there are smaller players, and every single day there are new startups wanting to emerge and have a go-to-market strategy. The only way this is possible is if all of industry, big and small, the whole ecosystem, starts sharing documentation around red teaming, evals, multilingual benchmarks, and things like that, to come to some sort of consensus.

Sabina Chofu

Thanks so much. I’m really enjoying this positive vibe we’re going with. And that combination links really nicely back to what Bella was saying around coalitions built on themes: where do we think we have common ground, and what do we think we can build on? So I really enjoyed that contribution. Rajesh, can I turn to you next? Because I did wonder what all this means for smaller and developing economies. Do you have any examples of shared standards, pooled resources, any of the things Halak was talking about, public-private models, or anything you’ve seen that looks promising, that looks like it could deliver?

Thank you.

Rajesh Nambia

You know, as we said, the moment you look at shared models, there are multiple reasons why we want to do this. One, of course, as we’ve talked about, is the cost involved. That by itself is becoming cost prohibitive, and hence many countries may have no option but to adopt a shared model. We also see this in the regional compute consortiums that folks can potentially create, and you often see examples of, say, a standard data set being shared, not just within a country but between government, academia, and industry, making sure they’re all able to leverage the same data sets.

Compute clearly continues to be a shared resource in many places. Even in India, for example, our own AI Mission has created a cluster that can be broadly leveraged by industry, academia, and government, ensuring they get access to the right set of GPU farms and can take it forward. So: public-private sharing of data, certainly the compute consortiums, and then cloud credits. Sovereigns have been able to work with the hyperscalers on cloud credits for GPUs especially, because even if it’s not about building a frontier model, you need them to leverage a frontier model, build some reasoning models on top of it, and build an application which is meaningful. Not that you need a powerful GPU every time, but there are occasions where you definitely would, and hence using some of those cloud credits becomes a big need. And then, when you switch to regulations, how do you make sure that even having a policy is something which is shared? You don’t want to reinvent the wheel every single time. Do you have a method by which you could look at what is out there in the world and try to reuse it? Because what you don’t want is a hundred versions of the same thing with a few nuances here and there. So that’s something which I think companies will try to create a model for as well.

Sabina Chofu

Thank you so much, and I’m going to turn over to the audience.

Audience

Yes. Looking forward to a truth, transparency and accountability-driven world. It takes 30 years for Epstein files to come out in a place like America, the developed world. Is that the speed of the system till it collapses and till we start a new world? Are we resigned to that fate?

Sabina Chofu

Yeah, so I can’t really see the link between the Epstein files and our topic, to be honest, but thank you. So, just to build on what Rajesh was saying there on capability, maybe we move into a bit of cross-border cooperation. Bella, if I can turn to you to build on those points. Because what we are seeing across the developing world in particular is that institutional capacity is often a bit of an issue; even with all the engagement and all the investment, you still run into it.

And I saw you were taking notes furiously, so I’m sure you have reflections on what has been said so far. But also, what are some of the resources…

Bella Wilkinson

…dependencies, figure out what they want to invest in and what dependencies they’re willing to accept; wanting to build strong institutions, again, that can mainline AI directly into public service delivery and, as you said, enable cross-border cooperation; they might take a step back and figure out which foreign capabilities or foreign services they’re willing to accept at some levels of the stack, and where they’d like to invest in indigenous solutions. And I mentioned open source earlier because this has come up time and time again, and I’m sure it’s going to be absolutely no surprise to our audience here today. An example which has really stuck with me, and Rafi, I’d be really interested in your thoughts on this, is the Southeast Asian Languages Under One Network model, the multilingual SEA-LION LLM.

And this is something we’ve called for, again, in a really interesting collaboration with AI Safety Asia: open models with local adaptation, really balancing inputs from open-source models, potentially provided by foreign providers, with adaptation to a local context. And so, leaving the summit, what I’m really going to be interested in is this connection between drawing on inputs from the open-source community, fine-tuning and locally adapting their contributions, and then perhaps doing so not only in the service of strong, robust institutions at the national level that are AI-ready, but also at this kind of collective, cross-border level. I hope that makes sense.

Sabina Chofu

It does, and I’m going to let Rafi feed into that as well, because you’ve segued really nicely into his part. Feel free to react to what Bella has said, but if you can also touch upon what you’ve seen as best practice in international and cross-border collaboration, maybe in healthcare, climate resilience, education, anywhere you’ve seen good stories to tell, please do share.

Rafik Rikorian

I mean, I do think a lot about local fine-tuning, and I think that’s actually a really powerful concept: we can all contribute to a core and then locally fine-tune for our values and our needs. This has shown up in a bunch of different ways, and I’m personally interested in all these alternative, or I don’t even want to call them alternative, these other architectures that make this possible, because in some ways we’re being fed a regime that says it’s not possible, but architecturally it actually is, in a bunch of different ways. So I love the indigenous data model: looking at what different indigenous peoples have done around data collectives for their local areas. There’s a group of people in Hawaii, for example, doing this for their genomic data, because genomic data is really useful for pharmaceutical models. They’ve been looking for ways so they can both monetize their data and trace its provenance as it goes through these pharmaceutical models.

So there are some professors out of UCSD starting to build what these data trusts could look like for Hawaiian people, and I think that that model could be replicated in lots of different parts of the world. Mozilla is actually attempting to do a bunch of this. So we’re creating something that we call the Mozilla Data Collaborative, or Collective, sorry. And what the Collective is meant to be is a marketplace of ethically sourced but provenance-traced data sets, so that you can bring your data. It will actually help you scrub it, clean it, et cetera, and also make sure you have the appropriate licenses on it, so that people can come find the data sets they want to train their models, but make sure that attribution is given, compensation is given, et cetera.

So we’re literally in conversations with almost every radio station on the planet to try to get their recordings and their transcripts onto the marketplace, not for Mozilla to make money. In fact, we actually want the radio stations to have a monetization path for all the data they’re sitting on, rather than simply have it scraped by big model providers trying to soak it into their systems. Instead, require that it be licensed, require that compensation be given. So I think there are models there. And on the computational side, there are also a lot of interesting things showing up around federated learning. For those of you who don’t know what federated learning is: Google did this very famously when they trained their handwriting model across everyone’s Android phones.

Your handwriting is very personal and private, and it stays on your device. Yet Google was able to train a handwriting recognition model without getting access to your data, because part of the training happened on your phone, and then only the model weights were shipped back up for centralized training. I think something like that could be an interesting model for international collaboration: I can bring my data to the game, my healthcare data, my values data, my language data, but not have to release it to a different company, or sorry, a different country, and instead allow it to be done in a different way. And I think that’s a really interesting model.

I could do part of the training on my compute, on my infrastructure, and only ship model weights back up, and actually then create bigger models across borders and across geographies that take into account different healthcare scenarios, different value systems, et cetera. So I think there are these interesting alternative architectures that we can actually start leaning into, these data trust models, these federated learning models, that could be massive enablers for cooperation and allow us to build these foundational things that we can then fine-tune and bring to our local context.
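The federated pattern described above can be sketched in miniature: each participant trains on data that never leaves its own machine, and only weight vectors travel to be averaged. The toy linear model and all names below are illustrative assumptions by the editor, not Google’s or Mozilla’s actual systems:

```python
# Minimal federated averaging (FedAvg) sketch: clients train locally,
# a coordinator averages the returned weights; raw data never moves.
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=50):
    """One participant fits a linear model on private data.
    Only the updated weights are returned; X and y stay local."""
    w = weights.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)  # mean-squared-error gradient
        w -= lr * grad
    return w

def federated_round(global_w, clients):
    """Coordinator averages client weights, weighted by dataset size."""
    sizes = [len(y) for _, y in clients]
    updates = [local_update(global_w, X, y) for X, y in clients]
    total = sum(sizes)
    return sum(n / total * w for n, w in zip(sizes, updates))

# Two "countries" hold disjoint samples of the same underlying relation
rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
clients = []
for _ in range(2):
    X = rng.normal(size=(40, 2))
    clients.append((X, X @ true_w))

w = np.zeros(2)
for _ in range(10):            # ten communication rounds
    w = federated_round(w, clients)
# w converges toward true_w without either dataset leaving its holder
```

In a real deployment the averaging step would run on neutral infrastructure, and techniques such as secure aggregation or differential privacy are typically layered on so that individual weight updates reveal even less.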

Sabina Chofu

Thanks so much, Rafi. That fine-tuning seems to be definitely a theme in this conversation, how you build for different cultures and countries. And Halak, maybe I can come to you next, because we keep talking about international cooperation and coordination. But I’m wondering, how do you translate that, you know, chit-chat into actual skills, capacity, and capability for emerging economies? We are at a very international AI Impact Summit, so how do we go from talking about governance to all this international policy actually delivering for emerging economies?

Halak Shirastava

It’s a good question. Let me start out by saying capacity building isn’t just running workshops or telling regulators what should be done. Capacity building for emerging economies especially is critical because emerging economies have unequal access to data, information, and technology. So what are we trying to solve for here? The first thing I would say is shared evidence: we need players to feed into this capacity-building system with documents, results, and performance benchmarks, to lift up other players. That, I think, would be number one.

The second thing I think is key and sometimes overlooked is the value of procurement policies. And I agree with Isabella: what if we had an industry coalition, a cross-border network, solving for procurement policies or procurement rules? What this does is bring in global players, so now you’re opening up your country to different markets. The next thing I would say, let me put it this way: there are developers, who develop the technology, and then there are deployers, who buy the technology and use it, for example a public sector agency.

… Economist Frank Nagel has a recent report estimating that approximately 24 billion U.S. dollars are being wasted right now by not switching to open-source models. So the economics are starting to make a lot of sense. Once all these stars align, it becomes almost obvious what an answer could look like for local governments around open-source AI models, et cetera. So I’m really excited for that in the next 12 to 18 months.

Sabina Chofu

Thank you. Rajesh?

Rajesh Nambia

No, I agree with both of what’s been said so far, but I also want to add this: when you look at AI governance, people tend to lead with regulation first. I believe that countries, and especially the countries we talked about from an inclusion point of view, have got to lead with an innovation-first mindset. Regulation is certainly needed, but innovation is probably needed more, in some sense. Also, while there could be horizontal governance which applies to every AI system, I think the more meaningful governance is what you find when you get into sectoral governance. When you look at AI systems for healthcare, the understanding of a harm in the healthcare segment is very different from financial services, and so on. So when you get into those sectoral areas, you can have a meaningful governance structure. And last but not least, you need the right talent, people who actually understand all of this, in the public sector and among those who are supposed to be governing it. It’s not talent in terms of broader AI model building; it’s making sure the talent in the governance space, in governments and among the people actually regulating, understands the real harms, because if they don’t, it’s going to be a bigger issue. And especially for the list of countries we talked about, the deeper down the list you go, the more you will find a talent gap in terms of that understanding.

Sabina Chofu

Thank you. And, you know, as someone who lives in Brussels, I’ll make sure to take that message back. Halak.

Halak Shirastava

Okay, so what am I most excited about in the next 12 months? In the last few days, you’ve seen companies really excited about AI, but you’ve also seen countries very excited about AI. So what does this mean for governance? It means that the community and the participation are only going to increase; I don’t see it going backwards. As technology evolves, more players are going to have a voice in the system and in the standards bodies, the ITU or the ISO. And because of this convergence, we as a society are going to increase our literacy, not only of AI but of technology, and bring it into whatever sector we’re in, private or public.

And because of that, I think a lot of progress will be made in the next 12 months, and you’ll see it as it converges.

Sabina Chofu

Thank you so much. Thanks to all the panel, thanks for being here, and enjoy the rest of your day. Thank you.

Related Resources: Knowledge base sources related to the discussion topics (32)
Factual Notes: Claims verified against the Diplo knowledge base (3)
Confirmed (high confidence)

“Sabina Chofu is the International Policy and Strategy Lead at TechUK, and TechUK is the sister association of NASCOM in the UK.”

The knowledge base lists Sabina Chofu as International Policy and Strategy Lead at TechUK and notes that TechUK is the sister association of NASCOM in the UK, confirming the report’s statement.

Additional Context (medium confidence)

“India is creating public‑private compute consortia, shared GPU clusters such as the AI Mission compute cluster, and cloud‑credit schemes from hyperscalers to provide AI resources without each country having to build a frontier model from scratch.”

A recent Indian white‑paper described a national push to democratise AI infrastructure, treating compute, datasets and models as digital public goods and encouraging shared resources, which adds context to the reported compute‑consortium initiatives.

Additional Context (medium confidence)

“The emerging “AI divide” is larger than the earlier digital divide because it concerns both agency and access.”

Discussion in the knowledge base about policy levers to bridge the AI divide highlights that the divide now encompasses issues of agency and access beyond the traditional digital‑access gap, providing additional nuance to the claim.

S80
AI Development Beyond Scaling: Panel Discussion Report — Choi advocates for AI democratization where AI reflects human knowledge and values, serves all humans rather than just t…
S81
Better governance for fairer digital markets: unlocking the innovation potential and leveling the playing field (UNCTAD) — Access to open markets through regulation is highlighted as beneficial for small messaging companies. This provides oppo…
S82
Setting the Rules_ Global AI Standards for Growth and Governance — Etienne Chaponniere from Qualcomm brought a unique perspective as a chipset provider, emphasising the democratising pote…
S83
Omnipresent Smart Wireless: Deploying Future Networks at Scale — Harmonization between stakeholders is essential for the successful deployment of 6G. Standardization, scalability, and i…
S84
Welcome 2015 ‒ a year of cyber(in)security — Developing institutional and professional capacities is recognised in various forums as a precondition for successful im…
S85
Dynamic Coalition Collaborative Session — Dr. Muhammad Shabbir: Thank you very much, Rajendra, and thank you very much to my colleagues who have spoken before me….
S86
Agenda item 6: other matters — Japan: Thank you, Mr. Chair. Japan believes that capacity building is essential to maintaining peace and stability and…
S87
WS #187 Bridging Internet AI Governance From Theory to Practice — The discussion maintained a thoughtful but increasingly cautious tone throughout. It began optimistically, with speakers…
S88
Engineering Accountable AI Agents in a Global Arms Race: A Panel Discussion Report — The discussion maintained a thoughtful but somewhat cautious tone throughout, with speakers acknowledging both opportuni…
S89
Defying Cognitive Atrophy in the Age of AI: A World Economic Forum Stakeholder Dialogue — The discussion began with a cautiously optimistic tone, acknowledging both opportunities and risks. However, the tone be…
S90
What is it about AI that we need to regulate? — The Role of International Institutions in Setting Norms for Advanced TechnologiesThe discussions across IGF 2025 session…
S91
Charting New Horizons: Gender Equality in Supply Chains – Challenges and Opportunities — A conscientious request for clarity and specificity was also apparent, underlining the need for concrete, actionable pla…
S92
Leaders TalkX: Digital Advancing Sustainable Development: A Trusted Connected World — The unwaveringly positive sentiment underlines a strong conviction in the potential of collective and inclusive efforts …
S93
Open Forum #44 Building Trust with Technical Standards and Human Rights — The tone was largely collaborative and solution-oriented. Speakers approached the topic from different perspectives but …
S94
Summit Opening Session — The declaration was developed through an inclusive consultation process within the International Advisory Body on Submar…
S95
Open Forum #52 Strengthening Information Integrity Through Coalitions — The discussion maintained a professional and collaborative tone throughout, characterized by urgency about the scale of …
S96
Powering the Technology Revolution / Davos 2025 — The tone was generally optimistic and forward-looking, with panelists highlighting opportunities for innovation and prog…
S97
Next-Gen Industrial Infrastructure / Davos 2025 — The tone was largely optimistic and forward-looking, with speakers enthusiastically sharing their visions and initiative…
S98
Building Population-Scale Digital Public Infrastructure for AI — The tone is optimistic and collaborative throughout, with speakers sharing concrete examples of successful implementatio…
S99
Governments, Rewired / Davos 2025 — The overall tone was optimistic and forward-looking, with speakers highlighting the transformative potential of technolo…
S100
Flexibility 2.0 / Davos 2025 — The panel discussion provided a comprehensive exploration of the gig economy’s impact on the future of work. While ackno…
S101
Opening of the session — – Addressing the technological divide between developed and developing countries Chair: Thank you very much, Belgium, …
S102
https://dig.watch/event/india-ai-impact-summit-2026/ai-for-social-empowerment_-driving-change-and-inclusion — And it’s impossible to regulate this. It’s impossible to regulate this because it’s everywhere. So the only way we are a…
S103
https://dig.watch/event/india-ai-impact-summit-2026/ai-that-empowers-safety-growth-and-social-inclusion-in-action-2 — I mean, the high impact use case can have more investment, more focus versus a low risk, right? I think that’s the first…
S104
International Cooperation for AI & Digital Governance | IGF 2023 Networking Session #109 — This comprehensive approach aims to understand the societal, economic, and ethical aspects of this advancing technology….
S105
Webinar session — Vera Toro argues that achieving consensus during a period when multilateralism faces widespread questioning serves as im…
S106
The Geopolitics of Materials: Critical Mineral Supply Chains and Global Competition — Mosako argues that development finance institutions like hers can bridge the gap between regions with different comparat…
S107
AI race shows diverging paths for China and the US — The US administration’s new AI action plan frames global development as anAI racewith a single winner. Officials argue A…
S108
Africa’s Prospects in the New Global Economy: A Comprehensive Analysis from Davos — Johann Jurie Strydom from Old Mutual highlighted opportunities for financial inclusion through digital platforms, noting…
S109
TradeTech’s Trillion-Dollar Promise — Barriers on data and technology side affect emerging economies harder. The inability to connect and create necessary in…
S110
New plan outlines how India will democratise AI infrastructure — Indiais moving to rebalance access to AI infrastructureas part of a new national push to close gaps in computing power a…
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
Bella Wilkinson
2 arguments · 155 words per minute · 979 words · 377 seconds
Argument 1
Global consensus on AI governance is unrealistic; focus on partial alignment via issue‑specific coalitions
EXPLANATION
Bella argues that achieving worldwide agreement on AI rules is impossible in the current geopolitical climate. Instead, she suggests concentrating on limited, issue‑focused coalitions that can later be scaled through multilateral formats.
EVIDENCE
She states that “Global consensus on how to govern AI is a no-go” but that “partial alignment on priority issue areas is possible,” recommending support for smaller gatherings that can be scaled later [26-28].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Bella’s view that worldwide AI consensus is a “no-go” and that issue-specific coalitions are feasible is echoed in the Smart Regulation commentary (compute and coalition challenges) [S1], the discussion on how to bring minimally aligned states together via coalitions [S8], and the trust-building partnership pivot noted in the Leaders TalkX summary [S14].
MAJOR DISCUSSION POINT
Realism of Global AI Governance and Need for Coalitions
AGREED WITH
Sabina Chofu
DISAGREED WITH
Rajesh Nambia
Argument 2
Multilateral institutions can act as brokers, but trusted mechanisms are needed to bring rivals together
EXPLANATION
Bella notes that traditional multilateral bodies have historically mediated complex governance issues, but today they lack the trust needed to convene competing powers. She proposes building coalitions around trusted mechanisms to overcome this gap.
EVIDENCE
She explains that “multilateral institutions in the past have been brokers, mediators and scalers of consensus” but now the challenge is “how on earth do we bring rivals and competitors around the same table?” and suggests coalition building around trusted approaches [36-39].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The role of international institutions as brokers and the need for trusted mechanisms are highlighted in the analysis of norm-setting bodies for advanced technologies [S11], the question of convening rival states through trusted approaches [S8], and the emphasis on trust-building partnerships in the Leaders TalkX report [S14].
MAJOR DISCUSSION POINT
Realism of Global AI Governance and Need for Coalitions
Sabina Chofu
1 argument · 142 words per minute · 1209 words · 508 seconds
Argument 1
Coalition building is the most pragmatic path given current geopolitical tensions
EXPLANATION
Sabina agrees that the world lacks the multilateral cooperation needed for AI governance and highlights coalition building as the realistic way forward. She frames it as the best hope under present conditions.
EVIDENCE
She says, “you have talked about coalition building, and it’s probably the best we can hope for in the world as it is” after Bella’s remarks [48-50].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Sabina’s endorsement of coalition building aligns with the Smart Regulation commentary that records the remark “you have talked about coalition building, and it’s probably the best we can hope for” [S1].
MAJOR DISCUSSION POINT
Realism of Global AI Governance and Need for Coalitions
AGREED WITH
Bella Wilkinson
Audience
1 argument · 122 words per minute · 53 words · 26 seconds
Argument 1
Current transparency and accountability processes are too slow, demanding faster mechanisms
EXPLANATION
An audience member points out that existing transparency mechanisms, such as the release of investigative files, take decades, which is unacceptable for rapidly evolving AI risks. They call for a speedier system to avoid systemic collapse.
EVIDENCE
The audience remarks, “It takes 30 years for FC files to come out in a place like America… Is that the speed of the system till it collapses?” highlighting the slowness of current processes [135-138].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Audience concerns about slow filing cycles are documented in the Smart Regulation piece (30-year delays) [S1], reinforced by the Day 0 accountability mechanisms discussion [S15], and by the Davos observation that regulation lags behind fast-moving cyber threats [S16].
MAJOR DISCUSSION POINT
Realism of Global AI Governance and Need for Coalitions
Rajesh Nambia
7 arguments · 195 words per minute · 1953 words · 598 seconds
Argument 1
Severe compute access gap hampers AI development in smaller and developing economies
EXPLANATION
Rajesh emphasizes that limited access to high‑performance compute resources puts developing nations at a significant disadvantage compared with the US and China. He warns that without shared or pooled compute, these economies will fall further behind.
EVIDENCE
He describes the “limited access to the broader compute facility” as a major barrier for smaller countries and stresses the need for pooling resources [57-60].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Rajesh’s point on compute scarcity is supported by the Smart Regulation analysis of limited compute for smaller economies [S1], the high GPU cost barrier noted in the open-source LLM session [S20], the observation of a global “compute divide” [S21], and India’s public GPU infrastructure example [S22].
MAJOR DISCUSSION POINT
Barriers to AI Adoption in Developing Nations
DISAGREED WITH
Bella Wilkinson
Argument 2
Data silos, poor data quality, and inadequate infrastructure (power, connectivity) limit AI potential
EXPLANATION
Rajesh points out that data in many developing regions is fragmented across government and departmental silos, often of low quality, and that unreliable power and connectivity further restrict AI projects. These factors together hinder the creation of representative AI models.
EVIDENCE
He notes that “the data itself is very siloed” and that “power is going to be a huge foundation” while also mentioning connectivity challenges despite satellite options [61-66].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The challenges of data silos and low-quality data are highlighted in the AI as critical infrastructure briefing [S18], while connectivity gaps in developing regions are discussed in the Emerging Tech Q&A [S17].
MAJOR DISCUSSION POINT
Barriers to AI Adoption in Developing Nations
Argument 3
High expense of GPU clusters and lack of clean power further exacerbate the divide
EXPLANATION
Rajesh argues that even when purchasing power parity is considered, the cost of assembling GPU clusters needed for meaningful models is prohibitive. Additionally, the requirement for clean energy adds extra financial and logistical burdens for developing economies.
EVIDENCE
He explains that “the expense of GPU clusters” is a “very different ballgame” and that “clean power” adds an additional tax for the developing world [59-60][66-68].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The prohibitive cost of GPU clusters is examined in the open-source LLM discussion on GPU expenses [S20], and the broader compute-divide analysis underscores financial hurdles for clean-power-dependent hardware [S21].
MAJOR DISCUSSION POINT
Barriers to AI Adoption in Developing Nations
Argument 4
Public‑private compute consortia and cloud‑credit programs can pool resources and give broader access to AI compute
EXPLANATION
Rajesh cites examples from India where government, academia, and industry share GPU resources through a national AI mission, and where cloud‑credit arrangements with hyperscalers help smaller players access compute without owning expensive hardware.
EVIDENCE
He mentions “our own AI mission has created this cluster” that is shared across sectors and notes that “sovereigns have been able to work with the hyperscalers… to get cloud credits for GPUs” [130-133].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Public-private consortia and cloud-credit schemes are exemplified by India’s AI mission and hyperscaler cloud-credit arrangements described in the equitable compute access briefing [S22].
MAJOR DISCUSSION POINT
Models for Cross‑Border Collaboration
AGREED WITH
Rafik Rikorian
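The pooling model Rajesh outlines, a shared national cluster drawn on by government, academia and industry, can be caricatured in a few lines. The allocator below is a purely illustrative sketch, not any actual AI-mission scheduling policy; all names and numbers are hypothetical assumptions.

```python
def allocate_gpu_hours(pool_hours: int, requests: dict) -> dict:
    """Split a shared GPU pool among consortium members.

    If total demand fits within the pool, everyone gets their full ask;
    otherwise hours are divided proportionally to each request (rounded down).
    """
    total = sum(requests.values())
    if total <= pool_hours:
        return dict(requests)
    return {member: pool_hours * ask // total for member, ask in requests.items()}

# Hypothetical consortium: a university, a startup and a ministry
# sharing 1000 GPU-hours against 2000 hours of combined demand
grants = allocate_gpu_hours(
    1000, {"university": 800, "startup": 400, "ministry": 800}
)
```

Real consortia layer priority tiers, cloud credits and clean-energy constraints on top of this, but the core idea is the one Rajesh describes: many under-resourced actors sharing one capital-intensive pool rather than each buying hardware alone.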
Argument 5
Developing talent for both AI innovation and governance is essential for effective sectoral oversight
EXPLANATION
Rajesh stresses that countries need skilled personnel not only to build AI systems but also to understand and regulate them, especially in sector‑specific contexts such as health or finance. He warns that talent gaps will undermine governance effectiveness.
EVIDENCE
He says, “you need the right talent and people who can actually understand… both in public sector and people who are supposedly governing” and highlights the uneven talent distribution across countries [214-219].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The importance of AI talent pipelines is emphasized in the India AI growth and skilling report [S24] and reinforced by the Smart Regulation note on uneven talent distribution across countries [S1].
MAJOR DISCUSSION POINT
Capacity Building and Skill Development for Emerging Economies
AGREED WITH
Halak Shirastava
Argument 6
Emerging economies should prioritize innovation and pilot projects before imposing heavy regulation
EXPLANATION
Rajesh argues that an innovation‑first mindset allows countries to build capacity and demonstrate value before layering restrictive regulations, which could otherwise stifle growth.
EVIDENCE
He states, “countries… have to lead with innovation first mindset because regulation is required but innovation is probably needed more” [213-215].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The innovation-first stance is advocated in the IGF balancing-innovation session [S25], the development-stage regulation perspective paper [S26], and the “innovation over excessive regulation” commentary [S27].
MAJOR DISCUSSION POINT
Innovation‑First Approach and Sector‑Specific Governance
DISAGREED WITH
Halak Shirastava
Argument 7
Sector‑specific governance (healthcare, finance, etc.) yields more meaningful oversight than blanket horizontal rules
EXPLANATION
Rajesh contends that AI risks differ across domains, so governance frameworks should be tailored to each sector rather than applying a one‑size‑fits‑all approach. This enables more precise risk mitigation and accountability.
EVIDENCE
He explains that “horizontal governance” is less meaningful than “sectoral governance” where harms differ, citing healthcare versus financial services [214-219].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The case for sector-tailored oversight versus horizontal rules is discussed in the IGF balancing-innovation session, which stresses domain-specific governance needs [S25].
MAJOR DISCUSSION POINT
Innovation‑First Approach and Sector‑Specific Governance
Rafik Rikorian
4 arguments · 189 words per minute · 1391 words · 439 seconds
Argument 1
An open‑source stack (Linux analogy) can provide shared AI infrastructure while preserving national sovereignty
EXPLANATION
Rafik draws on the Linux model, where a common code base underpins billions of devices, to illustrate how a shared AI stack could be collaboratively developed while allowing each nation to retain sovereign control over its implementation.
EVIDENCE
He explains that “every computer on the planet runs Linux” and that “every country… contributes to the single code base” while still being able to fine-tune their own versions [70-78].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Rafik’s Linux analogy aligns with the Global Perspectives on Openness and Trust report that highlights openness as a foundation for shared AI infrastructure while respecting sovereignty [S23], and with the open-source LLM discussion on shared stacks [S20].
MAJOR DISCUSSION POINT
Open Source, Standards, and Shared Infrastructure as Enablers
AGREED WITH
Halak Shirastava
DISAGREED WITH
Halak Shirastava
Argument 2
Developing open standards and interfaces enables global collaboration and digital sovereignty
EXPLANATION
Rafik argues that defining open standards and interfaces, similar to the transition from proprietary stacks to the LAMP stack, would let diverse actors contribute to a common foundation while customizing it to local values, thereby supporting digital sovereignty.
EVIDENCE
He discusses how “the LAMP stack” enabled openness, and calls for “open standards and open interfaces” to let countries own their version of the stack [91-96].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The push for open standards and interfaces is reinforced by the Global Perspectives on Openness report that calls for interoperable standards to foster collaboration [S23] and by the analysis of international norm-setting bodies that stress open standards for digital sovereignty [S11].
MAJOR DISCUSSION POINT
Open Source, Standards, and Shared Infrastructure as Enablers
AGREED WITH
Halak Shirastava
Argument 3
Data trusts offer ethically sourced, provenance‑tracked datasets for shared use and fair compensation
EXPLANATION
Rafik describes Mozilla’s Data Collaborative, a marketplace where data contributors retain provenance, licensing, and receive compensation, enabling ethically sourced datasets to be shared across AI developers.
EVIDENCE
He outlines the Mozilla Data Collaborative as “a marketplace of ethically sourced but provenance-tracked data sets” that ensures attribution and compensation, and mentions outreach to radio stations worldwide [160-166].
MAJOR DISCUSSION POINT
Models for Cross‑Border Collaboration
AGREED WITH
Rajesh Nambia
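Concretely, the minimum a provenance-tracked marketplace entry must carry is attribution, licensing and consent metadata. The sketch below illustrates that idea only; the field names are invented for clarity and are not Mozilla's actual Data Collaborative schema.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ProvenanceRecord:
    """One marketplace entry; every field name here is illustrative."""
    dataset_id: str
    contributor: str          # who must be attributed (and compensated)
    license_id: str           # e.g. a CC- or SPDX-style identifier
    source_uri: str           # where the data originally came from
    consent_documented: bool  # contributor explicitly agreed to this use

def usable_for_training(rec: ProvenanceRecord, allowed: set) -> bool:
    """Ethical-sourcing gate: documented consent plus a permitted license."""
    return rec.consent_documented and rec.license_id in allowed

def attribution_ledger(records: list) -> dict:
    """Count uses per contributor, a trivial basis for pro-rata compensation."""
    ledger = {}
    for rec in records:
        ledger[rec.contributor] = ledger.get(rec.contributor, 0) + 1
    return ledger
```

A radio station contributing speech recordings, say, would appear as the `contributor` on each of its records, so downstream trainers can both filter on license and compute what the station is owed.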
Argument 4
Federated learning allows cross‑border model training without exposing raw data, respecting privacy and sovereignty
EXPLANATION
Rafik explains federated learning as a technique where model training occurs locally on devices, with only aggregated model updates sent back, enabling collaboration across borders without sharing sensitive raw data.
EVIDENCE
He provides the example of Google’s handwriting model trained on phones, describing how “training happened on your phone” and only model weights were shipped back, preserving privacy [167-174].
MAJOR DISCUSSION POINT
Models for Cross‑Border Collaboration
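The mechanism Rafik sketches, local training plus aggregation of weight updates only, is federated averaging. The toy below shows it on a linear model with synthetic data; everything here is an illustrative assumption, not the actual Google system. Raw data stays inside each client tuple, and only weight vectors cross the "border".

```python
import numpy as np

def local_update(w_global, X, y, lr=0.1, epochs=20):
    """One client's round: plain gradient descent on its own private data."""
    w = w_global.copy()
    for _ in range(epochs):
        grad = 2.0 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w  # only the updated weights ever leave the "device"

def federated_round(w_global, clients):
    """Server step (FedAvg): average the clients' locally trained weights."""
    return np.mean([local_update(w_global, X, y) for X, y in clients], axis=0)

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])

# Three "jurisdictions", each holding data that never leaves home
clients = []
for _ in range(3):
    X = rng.normal(size=(50, 2))
    clients.append((X, X @ true_w + rng.normal(scale=0.05, size=50)))

w = np.zeros(2)
for _ in range(10):
    w = federated_round(w, clients)  # w converges toward true_w
```

Real deployments add secure aggregation and differential privacy on top, since raw weight updates can still leak information, but the sovereignty-preserving shape is exactly this: data stationary, models mobile.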
Halak Shirastava
3 arguments · 69 words per minute · 931 words · 798 seconds
Argument 1
Sharing evidence, benchmarks, and best‑practice documents builds technical capacity across borders
EXPLANATION
Halak stresses that providing shared documentation—such as performance benchmarks and evaluation reports—helps lift less‑resourced actors and creates a common evidence base for capacity building.
EVIDENCE
She says, “we need players to help into this capacity building system with documents, results, performance, benchmarks, to lift up other players” [188-191].
MAJOR DISCUSSION POINT
Capacity Building and Skill Development for Emerging Economies
AGREED WITH
Rajesh Nambia
Argument 2
Capacity building must include procurement policy frameworks and open‑source adoption, not just workshops
EXPLANATION
Halak argues that effective capacity building should go beyond training sessions to incorporate procurement policies that enable access to open‑source AI tools and create cross‑border industry coalitions.
EVIDENCE
She notes “the value of procurement policies” and proposes an “industry coalition… solving for procurement policies” to open markets for countries [191-196].
MAJOR DISCUSSION POINT
Capacity Building and Skill Development for Emerging Economies
AGREED WITH
Bella Wilkinson, Sabina Chofu
Argument 3
Evolving technical standards (NIST, ISO) provide flexible, international frameworks that can adapt to rapid AI advances
EXPLANATION
Halak points out that existing technical standards bodies such as NIST and ISO are developing adaptable frameworks that can accommodate fast‑moving AI technologies, offering a viable path for global alignment.
EVIDENCE
She references “technical standards” like NIST and ISO, describing them as “flexible and evolving” and useful for startups to avoid costly country-by-country compliance [102-106].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The relevance of adaptable technical standards is documented in the discussion of international standards bodies (NIST, ISO) offering flexible frameworks for AI governance [S11] and in the Global Perspectives report on evolving standards for AI [S23].
MAJOR DISCUSSION POINT
Innovation‑First Approach and Sector‑Specific Governance
AGREED WITH
Rafik Rikorian
DISAGREED WITH
Rafik Rikorian
Agreements
Agreement Points
Coalition building is the most pragmatic path for AI governance given current geopolitical tensions
Speakers: Bella Wilkinson, Sabina Chofu, Halak Shirastava
Global consensus on AI governance is unrealistic; focus on partial alignment via issue‑specific coalitions
Coalition building is the most pragmatic path given current geopolitical tensions
Capacity building must include procurement policy frameworks and open‑source adoption, not just workshops
All three speakers agree that a full global consensus on AI governance is unlikely and that forming issue-specific or industry coalitions, supported by appropriate procurement policies, is the realistic way forward [26-28][36-39][48-50][191-196].
POLICY CONTEXT (KNOWLEDGE BASE)
This view echoes calls for coalition-building around specific AI issues as a pragmatic alternative to universal treaties, highlighted in discussions on geopolitical constraints and the need for flexible cooperation [S51][S53][S60].
Open standards and shared open‑source infrastructure can enable global collaboration while preserving national sovereignty
Speakers: Rafik Rikorian, Halak Shirastava
An open‑source stack (Linux analogy) can provide shared AI infrastructure while preserving national sovereignty
Developing open standards and interfaces enables global collaboration and digital sovereignty
Evolving technical standards (NIST, ISO) provide flexible, international frameworks that can adapt to rapid AI advances
Both speakers advocate for open, interoperable standards and open-source stacks as a foundation for collaborative AI development that respects sovereignty [70-78][91-96][102-106].
POLICY CONTEXT (KNOWLEDGE BASE)
Standard-developing organisations argue that open technical standards bridge technology and policy, underpinning regulatory frameworks while allowing nations to retain control, a stance reflected in multiple policy briefs on AI standards and sovereignty [S43][S44][S46][S54].
Capacity building through shared evidence, benchmarks and talent development is essential for emerging economies
Speakers: Halak Shirastava, Rajesh Nambia
Sharing evidence, benchmarks, and best‑practice documents builds technical capacity across borders
Developing talent for both AI innovation and governance is essential for effective sectoral oversight
Both emphasize that providing shared documentation and developing skilled personnel are key to raising AI capacity in less-resourced countries [188-191][214-219].
POLICY CONTEXT (KNOWLEDGE BASE)
Capacity-building is repeatedly identified as critical for emerging economies, featuring in IGF forums on cybersecurity, African innovation-regulation balance, and AI upskilling initiatives [S55][S48][S60].
Public‑private compute consortia and data‑trust marketplaces can pool resources to broaden AI access
Speakers: Rajesh Nambia, Rafik Rikorian
Public‑private compute consortia and cloud‑credit programs can pool resources and give broader access to AI compute
Data trusts offer ethically sourced, provenance‑tracked datasets for shared use and fair compensation
Both propose collaborative models (compute-sharing consortia and data-trust marketplaces) to lower barriers for AI development in developing regions [130-133][160-166].
POLICY CONTEXT (KNOWLEDGE BASE)
Reports on compute scarcity in low-resource settings and public-sector compute programmes illustrate how public-private consortia and data-trust models can expand AI access [S59][S66][S65][S67].
Global AI governance consensus is unrealistic in the current geopolitical climate
Speakers: Bella Wilkinson, Sabina Chofu
Global consensus on AI governance is unrealistic; focus on partial alignment via issue‑specific coalitions
Coalition building is the most pragmatic path given current geopolitical tensions
Both speakers concur that achieving worldwide AI governance agreement is a ‘no-go’, and that partial, issue-focused alignment is the viable alternative [26-28][48-50].
POLICY CONTEXT (KNOWLEDGE BASE)
Analysts note that current geopolitical tensions make a universal AI governance treaty unrealistic, advocating instead for issue-specific coalitions and acknowledging disagreement over a global governance structure [S51][S53][S57][S64].
Similar Viewpoints
Both see coalition building around specific issues as the realistic way to advance AI governance amid geopolitical rivalry [26-28][48-50].
Speakers: Bella Wilkinson, Sabina Chofu
Global consensus on AI governance is unrealistic; focus on partial alignment via issue‑specific coalitions
Coalition building is the most pragmatic path given current geopolitical tensions
Both argue that open, interoperable standards and open‑source foundations are essential for collaborative, sovereign‑respecting AI development [70-78][91-96][102-106].
Speakers: Rafik Rikorian, Halak Shirastava
An open‑source stack (Linux analogy) can provide shared AI infrastructure while preserving national sovereignty
Developing open standards and interfaces enables global collaboration and digital sovereignty
Evolving technical standards (NIST, ISO) provide flexible, international frameworks that can adapt to rapid AI advances
Both stress that capacity building must combine shared technical evidence with development of skilled personnel to enable effective AI use and regulation in emerging economies [188-191][214-219].
Speakers: Halak Shirastava, Rajesh Nambia
Sharing evidence, benchmarks, and best‑practice documents builds technical capacity across borders
Developing talent for both AI innovation and governance is essential for effective sectoral oversight
Both propose collaborative resource‑sharing mechanisms—whether compute or data—to lower entry barriers for AI development in less‑resourced settings [130-133][160-166].
Speakers: Rajesh Nambia, Rafik Rikorian
Public‑private compute consortia and cloud‑credit programs can pool resources and give broader access to AI compute
Data trusts offer ethically sourced, provenance‑tracked datasets for shared use and fair compensation
Both recognize that, given the impossibility of universal consensus, open‑source models and issue‑specific coalitions can provide practical pathways for cooperation [26-28][70-78].
Speakers: Bella Wilkinson, Rafik Rikorian
Global consensus on AI governance is unrealistic; focus on partial alignment via issue‑specific coalitions
An open‑source stack (Linux analogy) can provide shared AI infrastructure while preserving national sovereignty
Unexpected Consensus
Open standards as a trust‑building mechanism for AI governance
Speakers: Rafik Rikorian, Halak Shirastava
Developing open standards and interfaces enables global collaboration and digital sovereignty
Evolving technical standards (NIST, ISO) provide flexible, international frameworks that can adapt to rapid AI advances
It is noteworthy that an open-source technologist and a policy lead converge on the importance of open, evolving standards, with Rafik arguing from a Linux-style ecosystem perspective and Halak from a standards-body policy perspective; this cross-disciplinary agreement positions standards as a cornerstone for trustworthy AI collaboration [91-96][102-106].
POLICY CONTEXT (KNOWLEDGE BASE)
Trust in AI systems is linked to transparent, auditable standards; multiple sources cite open standards as a key trust-building tool within AI governance frameworks [S43][S45][S46][S63].
Overall Assessment

The panel shows a clear convergence on three pillars: (1) coalition building and issue‑specific alignment as the pragmatic route for AI governance; (2) the adoption of open, interoperable standards and open‑source infrastructure to preserve sovereignty while enabling collaboration; (3) capacity building through shared evidence, benchmarks and talent development, complemented by public‑private resource‑sharing mechanisms. While there is agreement that a universal global consensus is unattainable, participants differ on the balance between innovation‑first approaches and regulatory frameworks.

Moderate to high consensus on practical cooperation mechanisms (coalitions, open standards, capacity building) but low consensus on the feasibility of a single global governance regime, implying that future policy work should focus on building issue‑specific coalitions, open‑source ecosystems, and shared capacity‑building initiatives.

Differences
Different Viewpoints
Sequencing of innovation and regulation for emerging economies
Speakers: Rajesh Nambia, Halak Shirastava
Emerging economies should prioritize innovation and pilot projects before imposing heavy regulation. Capacity building must include procurement policy frameworks and evolving technical standards (NIST, ISO) to avoid costly country‑by‑country compliance.
Rajesh argues that countries need an “innovation-first mindset” and should lead with innovation before regulation, suggesting regulation can stifle growth [213-215]. Halak counters that effective capacity building requires early adoption of flexible technical standards and procurement policies to open markets and avoid expensive compliance, implying that regulatory frameworks are essential from the start [102-106][191-196].
POLICY CONTEXT (KNOWLEDGE BASE)
African policy discussions stress the need to balance rapid AI innovation with proportionate regulation, highlighting sequencing challenges for emerging economies [S48][S64].
Preferred mechanism for shared AI infrastructure and governance
Speakers: Rafik Rikorian, Halak Shirastava
An open‑source stack (Linux analogy) can provide shared AI infrastructure while preserving national sovereignty. Evolving technical standards (NIST, ISO) provide flexible, international frameworks that can adapt to rapid AI advances.
Rafik promotes an open-source model, likening AI to the Linux ecosystem where a common code base is collaboratively built and each nation fine-tunes its own version, also describing Mozilla’s data collaborative as a marketplace for ethically sourced data [70-78][160-166]. Halak emphasizes the role of formal, evolving technical standards such as NIST and ISO to give startups a flexible compliance path and to avoid costly country-by-country regulation [102-106]. The two propose different primary enablers – open-source community versus standards bodies.
POLICY CONTEXT (KNOWLEDGE BASE)
Debates on the optimal mechanism point to open-source models, public compute platforms, and data-trust marketplaces as leading proposals for shared AI infrastructure [S59][S66][S67].
Primary barrier to AI adoption in developing nations: compute access vs. coalition building
Speakers: Rajesh Nambia, Bella Wilkinson
Severe compute access gap hampers AI development in smaller and developing economies. Global consensus on AI governance is unrealistic; focus on partial alignment via issue‑specific coalitions.
Rajesh highlights limited access to high-performance compute and the high expense of GPU clusters as a major obstacle, calling for public-private consortia and cloud-credit programs to pool resources [57-60][130-133]. Bella argues that a global consensus is a “no-go” and that progress should come from building issue-specific coalitions that can later be scaled, placing less emphasis on compute provision and more on governance mechanisms [26-29][38-44].
POLICY CONTEXT (KNOWLEDGE BASE)
Studies identify limited compute capacity as a primary barrier, while coalition-building approaches are promoted to mitigate resource gaps, underscoring the tension between these factors [S59][S65][S66][S60].
Unexpected Differences
Speed of transparency and accountability mechanisms
Speakers: Audience, Sabina Chofu
Current transparency and accountability processes are too slow, demanding faster mechanisms. Sabina's response dismisses the comment and introduces unrelated references, showing a lack of engagement with the concern.
The audience points out that it takes decades for investigative files to be released, calling for a faster system [135-138]. Sabina replies with unrelated remarks about “Aaron Mulder” and does not address the speed issue, indicating an unexpected disconnect between audience expectations and panel response [140-144].
POLICY CONTEXT (KNOWLEDGE BASE)
Policy frameworks such as the OECD AI Principles call for timely transparency and accountability, but stakeholders note divergent views on implementation speed and enforcement [S45][S64][S58].
Overall Assessment

The panel largely concurs that a universal AI governance consensus is unattainable and that coalition‑building is essential. However, substantive disagreements emerge around the sequencing of innovation versus regulatory frameworks, the preferred technical mechanism for shared AI infrastructure (open‑source versus formal standards), and the primary barrier to AI adoption (compute access versus coalition‑driven governance). An unexpected tension appears between audience expectations for rapid transparency and the panel’s limited engagement with that demand.

Moderate to high: while there is consensus on the need for cooperation, the differing views on how to operationalize capacity building, infrastructure sharing, and regulatory sequencing could impede coordinated action, especially for emerging economies seeking concrete pathways.

Partial Agreements
Both agree that full global AI governance is unattainable at present and that building coalitions around specific issues is the most realistic way forward; they differ in emphasis rather than on the need for such coalitions [26-29][48-50].
Speakers: Bella Wilkinson, Sabina Chofu
Global consensus on AI governance is unrealistic; focus on partial alignment via issue‑specific coalitions. Coalition building is the most pragmatic path given current geopolitical tensions.
Both aim to increase AI capacity in developing regions through shared resources, but Rajesh focuses on institutional consortia and cloud credits, whereas Rafik emphasizes open‑source software stacks and data trusts as the sharing mechanism [130-133][70-78].
Speakers: Rajesh Nambia, Rafik Rikorian
Public‑private compute consortia and cloud‑credit programs can pool resources and give broader access to AI compute. An open‑source stack (Linux analogy) can provide shared AI infrastructure while preserving national sovereignty.
Takeaways
Key takeaways
Global consensus on AI governance is unrealistic in the current geopolitical climate; focus should shift to issue‑specific coalitions and partial alignment.
Multilateral institutions can act as brokers, but trusted mechanisms and coalition‑building are needed to bring rival states together.
Developing nations face a severe AI divide driven by limited compute access, data silos, poor data quality, and inadequate infrastructure (power, connectivity).
Open‑source models and shared technical standards (e.g., the Linux analogy) can provide common infrastructure while preserving national sovereignty.
Examples such as the Southeast Asian Languages Under One Network illustrate how open‑source LLMs can be locally fine‑tuned for language and cultural relevance.
Data trusts and federated‑learning architectures are promising models for cross‑border collaboration that respect data provenance and privacy.
Public‑private compute consortia, cloud‑credit programs, and shared GPU resources can help pool scarce compute capacity.
Capacity building must go beyond workshops to include sharing of evidence, benchmarks, procurement‑policy frameworks, and open‑source adoption guidance.
Emerging economies should adopt an innovation‑first approach and develop sector‑specific governance (healthcare, finance, etc.) rather than relying solely on horizontal regulation.
Evolving technical standards bodies (NIST, ISO, ITU) offer flexible, international frameworks that can adapt to rapid AI advances.
Resolutions and action items
Proposal to form issue‑specific coalitions (e.g., verification, hardware risk mitigation, anonymised usage data) that can later be scaled through multilateral formats.
Suggestion to create or expand public‑private compute consortia and cloud‑credit schemes to provide shared GPU resources for developing countries.
Call for the development of open standards and open interfaces for AI models to enable a LAMP‑like stack for AI.
Recommendation to establish data‑trust marketplaces (e.g., Mozilla Data Collaborative) that ensure provenance, licensing, and fair compensation for data contributors.
Encouragement to adopt federated‑learning approaches for cross‑border model training without exposing raw data.
Action item to share evidence, performance benchmarks, and best‑practice documentation internationally to build technical capacity in emerging economies.
Suggestion to coordinate procurement‑policy networks across countries to streamline acquisition of open‑source AI solutions.
Unresolved issues
How to create trusted, neutral mechanisms that can reliably bring rival states (e.g., US, China) into the same governance discussions.
Funding models and governance structures for large‑scale compute pooling and cloud‑credit distribution.
Specific pathways for scaling open‑source AI models while ensuring they meet diverse regulatory and cultural requirements.
Details of implementing federated‑learning frameworks across jurisdictions with differing data‑privacy laws.
Concrete steps for building and retaining AI talent (both technical and governance) in smaller economies.
How sector‑specific governance frameworks will be coordinated internationally to avoid fragmentation.
Mechanisms for aligning and updating technical standards (NIST, ISO) in a timely manner as AI capabilities evolve.
Suggested compromises
Partial alignment on priority issue areas rather than full global consensus.
Coalition building around trusted, limited‑scope mechanisms that can later be scaled via multilateral institutions.
Adopting open standards that allow shared core infrastructure while permitting national fine‑tuning for sovereignty.
Pooling compute resources and cloud credits while allowing individual countries to retain control over their own workloads.
Balancing open‑source contributions with local adaptation to meet cultural and regulatory needs.
Thought Provoking Comments
Global consensus on how to govern AI is a no‑go. However, partial alignment on priority issue areas is possible, and we should focus on building coalitions that can later be scaled through multilateral formats.
She challenges the optimistic narrative of universal AI governance, reframing the problem from seeking impossible global consensus to pragmatic coalition‑building, which sets a realistic tone for the discussion.
Her comment shifted the conversation from abstract geopolitics to concrete, actionable steps. It prompted Sabina to acknowledge coalition‑building as the best hope, and opened space for other panelists to propose specific mechanisms (e.g., open‑source models, compute consortia).
Speaker: Bella Wilkinson
The AI divide will be much bigger than the digital divide because it is about agency, not just access. Compute, data quality, power, connectivity and skills are layered barriers that disproportionately disadvantage smaller economies.
He expands the discussion from high‑level governance to the concrete, multi‑dimensional infrastructure gaps that developing countries face, highlighting why mere access to the internet is insufficient for AI participation.
His detailed enumeration of barriers deepened the analysis and gave the panel a concrete problem set to address. It led to follow‑up suggestions from Rafik about shared infrastructure and from Halak about standards and shared practices.
Speaker: Rajesh Nambia
Every computer runs Linux; the Linux model shows how a common code base can be contributed to by anyone while allowing sovereign fine‑tuning. We need an equivalent ‘LAMP‑stack’ for AI – open standards and interfaces that let each country build on a shared core.
He introduces a powerful analogy from open‑source software to AI governance, proposing a concrete architectural vision for collaborative yet sovereign AI development.
This analogy sparked a thematic thread on open‑source and modularity that recurred throughout the panel. It inspired Bella to cite the Southeast Asian multilingual LLM example and prompted Rafik later to discuss data trusts and federated learning as practical implementations.
Speaker: Rafik Rikorian
Technical standards (e.g., NIST, ISO) are flexible, evolving, and can prevent smaller companies from being priced out. International, evolving standards combined with industry coalitions can enable shared risk‑mitigation practices.
She identifies a tangible lever—standardisation—that can bridge the gap between diverse regulatory regimes and foster inclusive participation, moving the conversation from abstract governance to actionable policy tools.
Her focus on standards gave the panel a concrete area of convergence, leading Sabina to link it back to Bella’s coalition idea and prompting further discussion on interoperability and shared resources.
Speaker: Halak Shirastava
Mozilla’s Data Collaborative aims to create a marketplace of ethically sourced, provenance‑tracked data sets, giving data owners (e.g., radio stations) compensation and control, while providing clean data for model training.
He presents a concrete, innovative model for data sharing that addresses both ethical concerns and the data scarcity faced by developing regions, illustrating how open‑source principles can be operationalised.
This example grounded the earlier abstract talk of data trusts, leading Bella to reference it when discussing multilingual LLMs and prompting further interest in federated learning as a complementary approach.
Speaker: Rafik Rikorian
Capacity building isn’t just workshops; it requires shared evidence, procurement policy coalitions, and open‑source adoption to avoid billions of dollars wasted on proprietary models.
She expands the notion of capacity building beyond training, highlighting systemic levers (evidence sharing, procurement) that can accelerate adoption in emerging economies.
Her points redirected the conversation toward practical mechanisms for scaling AI in low‑resource settings, reinforcing Rajesh’s earlier emphasis on innovation‑first approaches and influencing the final round of discussion about talent and sector‑specific governance.
Speaker: Halak Shirastava
Countries should lead with an innovation‑first mindset; sector‑specific (healthcare, finance) governance is more meaningful than blanket horizontal rules, and we need talent that understands both technology and sectoral harms.
He challenges the typical regulatory‑first narrative, arguing for a nuanced, sector‑focused approach that balances innovation with safety, adding depth to the policy discussion.
This comment prompted Sabina to acknowledge the need for sector‑specific solutions and led to a brief but pointed exchange on talent gaps, reinforcing the panel’s consensus on the importance of building local expertise.
Speaker: Rajesh Nambia
Overall Assessment

The discussion pivoted from an initial, high‑level framing of AI governance to a grounded, solution‑oriented dialogue thanks to a handful of incisive remarks. Bella’s realistic appraisal of global consensus set a pragmatic baseline, while Rajesh’s exposition of the multi‑layered AI divide supplied the concrete challenges that needed addressing. Rafik’s open‑source analogies and data‑collaborative proposal, together with Halak’s focus on evolving technical standards and systemic capacity‑building, supplied actionable pathways for coalition‑building and shared infrastructure. Subsequent comments on innovation‑first, sectoral governance, and talent development deepened the conversation, steering it toward implementable policies for emerging economies. Collectively, these key comments reshaped the tone from speculative to constructive, aligning the panel around tangible mechanisms—open standards, data trusts, federated learning, and procurement coalitions—to bridge the AI divide.

Follow-up Questions
What messaging can drive coalition building in AI governance in the absence of trusted institutions and shared values?
Identifying effective communication strategies is crucial to foster trust and cooperation among competing nations and stakeholders.
Speaker: Bella Wilkinson
How can low‑hanging governance alignment (e.g., shared data governance, pooled compute) be operationalised for resource‑constrained countries?
Practical steps are needed for developing nations to benefit from coalition‑building without excessive cost or complexity.
Speaker: Bella Wilkinson
What concrete examples of shared standards, pooled resources, or public‑private models exist that could be replicated for smaller or developing economies?
Real‑world models would guide policy makers and practitioners in implementing collaborative AI initiatives.
Speaker: Rajesh Nambia
How can an open‑source, LAMP‑style stack be translated into AI to provide digital sovereignty, interoperability, and flexibility for nations?
Open‑source approaches could democratise AI infrastructure, allowing countries to customize while contributing to a common core.
Speaker: Rafik Rikorian
How can technical standards such as NIST and ISO be aligned across jurisdictions to reduce compliance costs for startups and smaller firms?
Harmonised standards would lower barriers to market entry and promote equitable participation in AI development.
Speaker: Halak Shirastava
How can shared risk‑mitigation practices (e.g., misuse evaluations, red‑team reports) be coordinated internationally?
Collective safety assessments can improve trust, reduce duplication of effort, and enhance global AI security.
Speaker: Halak Shirastava
How can interoperability of shared resources (datasets, benchmarks, evaluation tools) be achieved across large tech companies and startups?
Interoperability enables broader participation, fair competition, and faster progress in AI research and deployment.
Speaker: Halak Shirastava
How can federated learning architectures be leveraged for cross‑border collaboration while preserving data sovereignty?
Federated learning allows joint model training without moving raw data, addressing privacy and sovereignty concerns.
Speaker: Rafik Rikorian
What models of data trusts (e.g., indigenous data collectives, Mozilla Data Collaborative) can be scaled globally for ethical data sharing and compensation?
Data trusts provide provenance, licensing, and monetisation mechanisms essential for fair and responsible AI data use.
Speaker: Rafik Rikorian
What procurement policy frameworks could be established through an industry coalition to open markets for emerging economies?
Standardised procurement rules can facilitate adoption of open‑source AI solutions and stimulate local ecosystems.
Speaker: Halak Shirastava
How should capacity‑building be structured beyond workshops—e.g., through shared evidence, documentation, benchmarks—to effectively uplift emerging economies?
Tangible resources and shared knowledge are needed for sustainable capacity development in AI governance.
Speaker: Halak Shirastava
How can sector‑specific governance (healthcare, finance, climate, etc.) be developed to address distinct harms and regulatory needs?
Sectoral approaches may be more effective than generic rules, ensuring relevant safeguards for each domain.
Speaker: Rajesh Nambia
What strategies can address talent gaps in AI governance within governments of developing countries?
Building skilled personnel is essential for implementing and regulating AI responsibly at the national level.
Speaker: Rajesh Nambia
What are the implications and lessons from the Southeast Asian Languages Under One Network multilingual LLM model for collaborative AI development and governance?
The model illustrates how open‑source fine‑tuning and cross‑border collaboration can produce culturally relevant AI services.
Speaker: Bella Wilkinson (addressed to Rafik Rikorian)
How can cross‑border cooperation be facilitated given institutional capacity constraints in developing nations?
Institutional capacity is a bottleneck; identifying mechanisms to strengthen it is key for effective AI adoption.
Speaker: Sabina Chofu (to Bella)
What developments in AI governance standards and bodies (ITU, ISO, etc.) are expected over the next 12‑18 months?
Anticipating near‑term progress helps stakeholders plan actions and align with emerging frameworks.
Speaker: Halak Shirastava
Is the current pace of transparency and accountability (e.g., 30‑year lag for certain files) acceptable, or are we resigned to systemic delays?
Raises concern about the speed of governance processes and the need for more timely accountability mechanisms.
Speaker: Audience

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

Press Briefing by HMIT Ashwini Vaishnaw on AI Impact Summit 2026 | Day 5


Session at a glance: Summary, keypoints, and speakers overview

Summary

The AI Impact Summit 2026, chaired by Minister Ashwini Vaishnaw, brought together virtually every major AI player, numerous startups and a massive student audience, showcasing the “phenomenal” quality of dialogue and the Indian Prime Minister’s “Manav AI” vision [1-4][5-6][9-10]. Over 2.5 lakh students participated, earning a Guinness World Record, while investment pledges exceeded $250 billion for infrastructure and $20 billion for deep-tech venture capital [11]. The summit’s declaration attracted more than 70 signatories, with expectations to surpass 80, and Vaishnaw asserted that all important AI nations have already signed [14-19][20-22].


Vaishnaw highlighted that the first phase of India’s AI Mission has already outperformed its targets, deploying 38,000 GPUs against a 10,000-GPU goal, delivering a “bouquet” of twelve multimodal foundational models, and establishing twelve AI-safety institutes [127-134]. He announced the launch of AI Mission 2.0, which will raise the bar on models, common compute and safety, and noted that the numerous MOUs and collaborations signed during the summit constitute “real action” beyond the draft declaration [121-126]. The government also pledged to lay the foundation for a large semiconductor plant in Uttar Pradesh and to deepen the “Pax Silica” semiconductor supply-chain partnership, positioning India as a trusted global partner [44-45][163-165].


When asked about the voluntary nature of the frontier-AI commitment and the non-binding Delhi Declaration, Vaishnaw emphasized that extensive bilateral MOUs and consensus among major AI firms provide concrete implementation pathways [122-124][158-162]. He affirmed that the Synthetically Generated Information (SGI) regulations have been internationally accepted, with several countries seeking to align their data-protection frameworks with India’s model [230-238][242-244]. Emphasizing inclusive growth, he reiterated the goal of diffusing AI benefits to the “last person” in society, linking this to broader government programmes such as Jan Dhan and Swachh Bharat [144-152][155-156].


The minister concluded that the summit’s scale (over 100 countries, 45 ministerial delegations and 20 world leaders) demonstrates global confidence in India’s role in the emerging AI age [362-368]. He thanked the media, security agencies and all participants, and indicated that the forthcoming declaration will detail the agreed contours and next steps for the AI ecosystem [346-349][357-361]. Overall, the discussion underscored India’s ambition to lead AI development through sovereign models, robust governance and widespread stakeholder engagement, marking a pivotal moment in the nation’s AI trajectory [1].


Keypoints


Major discussion points


Scale and international impact of the AI summit – The summit attracted “practically every major AI player in the world” and showcased thousands of startups, setting a Guinness World Record for student involvement and securing over $250 billion in infrastructure and $20 billion in VC investments [10-12]. A record number of countries signed the declaration, rising from 60 to over 80 signatories, with “all the major countries” already on board [14-19]. The event also featured high-level diplomatic participation, with 45 ministerial delegations and representatives from 100 countries [362-368].


Progress of India’s AI Mission and roadmap for Mission 2.0 – The minister highlighted that the original targets of the AI Mission 1.0 have been met or exceeded: 38,000 GPUs deployed (goal was 10,000) and a portfolio of 12 multimodal foundational models built with limited resources [126-132]. Twelve AI-safety institutes are now operating, and the government is preparing a larger “AI Mission 2.0” to expand compute, models, and safety frameworks [120-134][125-128].


Commitment to responsible, ethical AI and emerging guardrails – Questions about global guidelines were answered by stressing that the “frontier AI commitment” and the forthcoming “Delhi Declaration” are voluntary but backed by consensus among major AI players [88-90][158-162]. The Synthetically Generated Information (SGI) framework has been accepted internationally, and India is advancing a strong data-protection regime that other countries are looking to emulate [228-236][242-244].


Development of a domestic semiconductor and compute ecosystem – The summit announced the laying of the foundation for a new semiconductor plant in Uttar Pradesh and the upcoming commercial launch of a massive Micron facility [45]. The “Pax Silica” initiative was highlighted as a cornerstone for building a trusted, resilient semiconductor supply chain, with India positioned as a preferred partner for global chip designers [325-334][326-329].


Focus on diffusion, inclusive growth, and Global-South participation – The minister repeatedly emphasized that AI benefits must reach “the last person” and cited India’s broader inclusive-growth programmes (Jan Dhan, Swachh Bharat, etc.) [144-152][155-158]. Significant representation from Global-South nations was noted, with a commitment to reflect their priorities in the joint declaration and to foster South-South collaboration [308-315]. Plans for AI education at school level were also mentioned, aiming to train millions of children [245-247].


Overall purpose / goal of the discussion


The session was designed to showcase the success of the AI Impact Summit, demonstrate India’s leadership in AI research, policy, and industry, and to communicate concrete follow-up actions – from investment pledges and the next-phase AI Mission to international agreements on responsible AI and the building of a domestic semiconductor ecosystem. It sought to reassure domestic and global stakeholders that India is a trusted partner for AI development, ethical governance, and inclusive diffusion.


Overall tone and its evolution


The tone began highly celebratory and proud, highlighting achievements, records, and international praise. As the Q&A progressed, the tone shifted to defensive and explanatory, addressing concerns about the binding nature of commitments, data protection, and implementation details. Throughout, the minister maintained an optimistic and forward-looking stance, concluding with gratitude toward partners and a reaffirmation of inclusive, collaborative growth. The progression moved from exuberant celebration to measured reassurance while retaining an overall positive and confident demeanor.


Speakers

Ashwini Vaishnaw – Role/Title: Honorable Minister for Electronics and Information Technology, Government of India; Areas of expertise: Electronics, Information Technology, AI policy, semiconductor industry [S1][S2][S3]


Audience – Role/Title: Various journalists, analysts, and members of the public; Areas of expertise: (not specified)


Randhir Jaiswal – Role/Title: Official, Ministry of External Affairs, Government of India; Areas of expertise: International relations, diplomatic engagement [S7]


Speaker 4 – Role/Title: Participant (questioner); Areas of expertise: (not specified)


Speaker 1 – Role/Title: Moderator/Host of the AI Impact Summit session; Areas of expertise: (not specified)


Additional speakers:


– None identified beyond the speakers listed above.


Full session report: Comprehensive analysis and detailed insights

Opening & Vision – Minister Ashwini Vaishnaw opened the AI Impact Summit 2026 by welcoming participants and presenting Prime Minister Narendra Modi’s “Manav AI” vision – AI of the humans, by the humans, for the humans – stressing responsible and ethical AI. He highlighted the unprecedented youth presence (over 2.5 lakh students, a Guinness World Record) and noted that, despite attempts by the Congress party to disrupt the event, the exhibition “belongs to the youth” and was embraced enthusiastically [9-10][68-71]. He also invoked “Viksit Bharat”, calling for the participation of all 1.4 billion citizens [144-152].


Scale & Participation – The summit drew delegations from roughly 100 countries, 45 ministerial teams and 20 world leaders, figures reported by MEA Secretary Randhir Jaiswal [362-368]. Over 60 start-ups showcased innovations, and the Delhi Declaration already had more than 70 signatories, with the minister saying the expectation is to exceed 80 by the close of the event [14-19][20-22]. Investment pledges surpassed $250 billion for infrastructure and about $20 billion for deep-tech venture capital [11-12].


AI-Mission 1.0 Achievements – The first phase of India’s AI Mission now operates 38,000 GPUs, with an additional 20,000 slated for launch, bringing total capacity to roughly 58,000 [127-129][128-130]. A “bouquet” of twelve multimodal foundational models has been announced, of which three are already released and the remaining nine are scheduled for later in the year [131-134][132-134]. Twelve AI-safety institutes function in a networked mode, underscoring progress on governance alongside capability [120-134].


Policy & Infrastructure Announcements – The Synthetically Generated Information (SGI) amendments were presented as internationally accepted, with three countries expressing interest in aligning their data-protection frameworks with India’s [230-238][242-244]. The minister announced the laying of the foundation for a new semiconductor plant in Uttar Pradesh and the imminent commercial launch of a massive Micron facility described as “more than 10 cricket fields” in size [44-46][45]. The “Pax Silica” initiative was highlighted as a cornerstone for building a trusted, resilient semiconductor supply chain [325-334][326-329][68-70]. References were made to inclusive-growth programmes such as Jan Dhan, Swachh Bharat and Har Ghar Nal Se Jal [144-152][155-158].


AI-Mission 2.0 Outlook – Vaishnaw outlined Mission 2.0, which will expand GPU capacity to about 58,000, increase the portfolio of foundational models, and deepen the safety-institute network, positioning it “definitely bigger than AI-Mission 1.0” [121-126][125-128][129-134]. He also hinted at possible viability-gap funding for socially valuable AI projects [268-277][122-124].


Q&A Highlights – Journalists queried (a) the binding nature of the Delhi Declaration and international guardrails for responsible AI [68-71][88-90]; (b) enforcement of voluntary frontier-AI commitments [88-90]; (c) big-tech participation in public-service AI and related MOUs [262-264]; (d) the high cost of compute and chips and potential government support [268-277]; (e) the timeline for AGI, to which the minister gave a non-committal response and referred to Mission 2.0 [250-252][121-125]; (f) the next summit location (Switzerland) and prospects for additional editions [223-226]; (g) data-privacy concerns about OpenAI/ChatGPT and India’s data-protection framework, noting three countries seeking alignment [230-238][242-244]; (h) the overhaul of TRP guidelines for traditional media [223-226]; (i) Global-South priorities and the African Union’s role [362-368]; and (j) the legal framework for AI-related cyber-crime discussed in the session [223-226].


Closing Remarks – Vaishnaw thanked the media, the Ministry of External Affairs and the Delhi Police for their “tireless” effort [31-33][357-361]. Randhir Jaiswal concluded by acknowledging the diplomatic success of the summit and the forthcoming release of the Delhi Declaration text, which will detail the agreed-upon points [362-368].


Overall, the summit combined celebration of scale with forward-looking commitments to translate diplomatic goodwill, investment pledges and technical milestones into a sustainable, inclusive AI ecosystem for India and the broader Global South [362-368][325-334].


Session transcript – Complete transcript of the session
Ashwini Vaishnaw

from the world. We had practically every major AI player in the world participating in large numbers. We had so many startups getting the opportunity to showcase their work. Overall, the quality of discussion was phenomenal. If you look at the ministerial dialogue, the leaders’ plenary, the main inauguration function, the summit as a whole, the quality of participation and the quality of dialogue were phenomenal. Honourable Prime Minister Narendra Modi’s vision of Manav AI, which is AI of the humans, by the humans, for the humans, was very well accepted by practically every major AI player in the world. In the ministerial dialogue and in the bilaterals which I and my colleague Shri Jitin ji had, practically every minister resonated with this, and everybody felt happy that we have brought the discussion about responsible and ethical AI to the forefront by involving two and a half lakh students in this entire journey.

We had a Guinness World Record for that involvement of the students. We also have a lot of investment pledges. I was just asking Abhishek ji and Krishnan ji; I think the number is growing each day, so it has already crossed 250 billion dollars for the infra-related investments and about 20 billion dollars for the VC deep-tech investments which have been committed by investors. This is a very important sign for us. The numbers are important, but what is more important is that the world has confidence in India’s role in the new AI age. That is very, very important for all of us, because there is always a need to bring out the talent and the energy that we have in front of the world, so that the world recognizes it.

I would also like to share with you that the previous action summit had about 60 signatories in the final declaration. We have already crossed 70. There are many ministers who are here and discussing with us, so I think that by the time we close the summit tomorrow, which, as you know, we have extended by one more day, it will cross 80. All the major countries have already signed. If you feel that somebody has not signed, you need not speculate on that. All the important people who matter in AI have signed. We will give you the formal number tomorrow once the summit closes. That is the way it should be done.

That’s the right way of doing things. We also had many interesting episodes: I met a very young innovator this morning, and others over the previous couple of days. Some very young people have done so much work in AI, which was very, very encouraging, because the youth see hope in this new world and have positivity about this technology. We also found very strong endorsement of our policy of working on all the five layers and our focus on having a sovereign bouquet of models. As for the models which were released, I tell you, in every bilateral that I had with the industry leaders, they were really surprised at the quality of output with so few resources. Compared with the kind of resources which some of the frontier labs have at their disposal, our engineers and researchers have produced such good models with such frugal resources, which gives a huge, huge endorsement to our efforts.

I would also like to thank all the team members and all the stakeholders, right from the media to the organizers, from ITPO, and special thanks to the MEA, the Ministry of Home Affairs and the Delhi Police for the effort they have tirelessly put in to make this a grand success. Thank you, everybody who participated in this. And thanks also to the youth who endorsed this and took this so positively. Whatever little effort Congress made to try to disrupt the summit, the youth very clearly said that this is their exhibition, that this is the summit for the youth who want to make the best use of it. They don’t believe in the negative politics that Congress was trying to play.

There were some bad choices here, people coming into the exhibition, and we took immediate action against anybody who tried to demean the good work that is being done by our startups, our engineers, and our people who are working in the AI field. We are a very open-minded government. We believe in taking your feedback. We believe in working with you. We believe in the goal of Viksit Bharat, and that’s why we would like to work tirelessly with you for this goal, for which our Prime Minister has given the vision for our entire country, and we have to do it together. This has to be done by all 140 crore of our citizens who believe in this common goal of Viksit Bharat, and these are steps in that direction.

Friends, tomorrow we will also be laying the foundation for our next semiconductor plant here in Uttar Pradesh. I invite all of you to join that ceremony as well. And on the 28th we will start commercial production from the Micron facility, which will be one of the largest facilities that Micron has, practically more than 10 cricket fields in size; it is very large, and it is going to be inaugurated on the 28th. All these are very methodical, step-by-step moves in the direction of creating the foundation which our Prime Minister is laying for the young generation, for Viksit Bharat, for all of you who are watching this on TV or social media. Our Prime Minister Shri Narendra Modi ji is laying the foundation for the country, which will be a developed nation by 2047.

I’ll take questions, and like in the past, we’ll start from the first row.

Speaker 1

Thank you, sir. First of all, please identify yourself and your organization’s name before asking the question. And as sir has said, start from the left. Yes, please.

Audience

Hi, sir. I am Nishant Ketu from ANI. My question is, how do you see India’s role in… Prime Minister began this on the 16th when this program began. And today where we are.

What observation or indication has the Prime Minister given to you about the AI Impact Summit?

Speaker 1

Next. Please.

Audience

Hi, this is Deepak Ajwani from Economic Times Digital Team. I have one simple ask. Have certain guidelines, guardrails, been put together by all the countries that were represented yesterday on the stage on effective, ethical, and responsible use of AI? Is there a paper that you can bring out, maybe tomorrow, where all of you have agreed that this is at least the first blueprint, which can be iterated on later? Thank you. Hi, sir. Shauvik from Mint. Sir, two questions. One is on the participation from big tech companies. Have there been conversations with global tech companies in terms of the role that they will play in India as far as public services are concerned?

Because each of them spoke about AI and its role in public services. And secondly, the models that were launched under the AI mission, which have also been backed: is there a takeaway from the summit in terms of where they go from here? Thank you. So, Oyeek from Money Control. So, just wanted to ask you: yesterday you had the frontier AI commitments, and the declaration will also come tomorrow. The frontier AI commitment is voluntary in nature, and the Delhi Declaration, I’m assuming, is non-binding. So how do we ensure that this does not remain on paper? How do we ensure implementation?

Ashwini Vaishnaw

Can you repeat your question?

Audience

So I’m saying the frontier AI commitment is voluntary in nature, and the Delhi Declaration, whenever it comes, is non-binding. So how do we ensure that this does not remain on paper, like the declarations and commitments made in the…

Speaker 1

Anybody else on front row? Anyone? Okay, please.

Audience

Sir, Ashish from Business Standard. All of the three previous summits had a focus area when it came to the declarations. If you could share just one line on what would be our focus area when the declaration is signed.

Speaker 1

Please. You are close? Yeah.

Audience

Hi, sir. Shubhan from the Economic Times. I understand that the declaration will be coming tomorrow, and as you mentioned, the list may be as high as 80-odd countries; now it’s 70, maybe up to 80. I wanted to understand: since some of the previous summits have seen significant differences of opinion, in India what were some of the areas where it was relatively easier to build consensus, and, if possible, what were some of the areas where it took a little bit of time?

Speaker 1

Next. Last.

Audience

Hi, sir. We look AI-ready globally, but my question would be about the last person standing in India: how far and how long will it take for AI to reach that one last person in India? Very good question. Hi, sir. This is Lalit from Best Media Info. My question is, we have been seeing that traditional media sectors like TV, radio and print have been fighting for ad spend, for advertising revenue, while digital is scaling up. Is there any way that AI or any policy can actually help bring balance to this revenue share of advertisement? My second question: there has been a long-pending TRP guidelines overhaul that was formulated.

It was meant to bring a multi-agency system into the picture and the removal of landing pages. We just want to know at what stage those guidelines are, and can we expect the guidelines coming in anytime soon? Sir, I am Prashant from AsiaNet News. There were very good sessions in this summit. How do you wish to take these sessions down to the grassroots level, so they can help the lives of the common man?

Ashwini Vaishnaw

So, there are questions about where we go from here and what the implementation will be. I’ll take all these questions one by one. I think, friends, the journey so far has been very meaningful and very methodical, starting from building the base, working through all the five layers, and creating that foundational level of work, and now getting the entire world to come here, deliberate, and interact with our industry. Now we’ll take our AI mission to the next level, where we will be focusing on a totally new level of models, a new level of common compute, a new level of safety. We have agreed so many collaborations in the last few days, which is where I would like to address that point about paper versus real action.

Yes, there is lots and lots of real action, real MOUs, real understanding, which has happened in the past few days, where on many of these things which concern us as well as the entire world we will be working in a very collaborative manner. That is the real action which will come out of it. We will very soon start working on AI Mission 2.0, which will be definitely bigger than AI Mission 1.0. Many of the goals we had set for ourselves in Mission 1.0 are on the verge of completion, and many of them have actually been exceeded. We wanted about 10,000 GPUs. We have 38,000 already, and another 20,000 are very soon going to be launched.

We have foundational models. We were looking at two foundational models; we have a bouquet of 12 models, many of them multimodal and very well rated. We wanted to have an AI safety institute; we now have 12 institutes working on this in a network mode. So all these goals that we had set for ourselves are being implemented very rapidly, so now we have to set bigger goals and achieve them as part of AI Mission 2.0. Our Honourable Prime Minister has always led from the front. The vision of Manav AI that he gave yesterday is something with which everybody resonated and which everybody accepted, in the ministerial dialogue, in the bilaterals, everywhere. People said it was the first time they had heard a vision which is so compelling, and it cuts across every civilization and every country. This is meaningful for everybody, every generation, every sector, every country, because ultimately it is humanity which matters the most, and that is why this vision resonated with everybody. Big tech participated very much in this, and the participation of the startups and young innovators was also very good. There is huge consensus on the declaration; we just want to maximize the numbers.

It’s so natural to do it. And given the size of this summit, it’s natural to set a number so that the record is always there. That’s why we are trying to maximize that. In fact, Abhishek… do a little more work. He was thinking he would take a day off, but no, he’s not going to get a day off. So, do a little more work. A very important question which came up is about diffusion, about the last person: how do we see the benefit reaching them? If you go to rich countries, you will find that 5G is very patchy; it’s not the way it is in our country. We will put the same effort into this as well.

And this will require a lot of hard work, and we are prepared to put in that hard work, that effort. Our Prime Minister keeps inspiring us that we should not stop till the benefit reaches the last person in society. That is always our goal. And we in the BJP have always had this basic tenet of Antyodaya. We believe in inclusive growth. And if you look at the Honourable Prime Minister’s programs, each and every program, whether it is Jan Dhan, whether it is Swachh Bharat, whether it is the construction of toilets, whether it is Har Ghar Nal Se Jal, each and every program has been created and executed to bring the benefits to the last person. We believe in inclusive growth as a basic political philosophy, and that is why here also that same political philosophy will be reflected.

I have absolutely no doubt about it, because this is a family in which inclusive growth is one of the most important tenets of our thought process. There were questions about guardrails. You might have seen that for the first time all the big AI players came on the same stage and agreed. Voluntary is one way of putting it, but of course we have discussed with them, and all of them have come to this consensus. Taking those first steps was very, very important, and I don’t want to exaggerate this, but if you ask any major policy leader in the world, and I had so many meetings today, each and every one is surprised at how we could pull the entire AI industry together to come forward and commit openly.

It is a major, major achievement, and this kind of achievement shows how India can lead the thought process. We also had Pax Silica today, which is very important for us from the semiconductor-industry perspective, from the resilient supply chain and resilient value chain perspective. And the fact that today, whether in Europe or in Australia or in the US or in Southeast Asia, everywhere we are seen as a trusted country, that itself speaks a lot about how our Prime Minister has conducted foreign policy and how he has developed that trust across every sector, every geography, every part of the world. On the Youth Congress, I have already responded.

Speaker 1

Next. Just a second. Second row, please. We will come one by one. Yes, please.

Ashwini Vaishnaw

MIB-related questions I will answer later today. We will talk about the AI mission next time.

Audience

Namaskar, sir. … Thank you. These are my two questions.

Speaker 1

Anybody else?

Audience

Sejal Sharma from Hindustan Times.

Speaker 1

Just a second. Yes, please. Who is asking? Second row? Yeah, please.

Audience

Congratulations on the declaration, sir. I just wanted to know, could you give us the names of some of the countries that have signed the declaration already? Just a few. Good evening, sir. I’m an independent journalist. First, I want to know what the outcomes are from each of the seven working groups that were formed before the summit. Second, before the summit began, the Indian government focused on how India will lead the Global South. How has that materialized during the summit? Has it materialized in the form of bilateral conversations, any MOUs, any pacts being signed? And number three, today the SGI amendments are supposed to go into effect. We had all the big AI companies and big tech companies of the world here.

Has there been any discussion on that? Because the companies have been fairly critical, both off the record and on the record, about the compliance deadline, the three-year takedown window, and some of the provenance-related specifics of the SGI amendments. Sir, Manas from the Times of India. Sir, has the objective of a technological framework been achieved? How many countries are on board, and what is the reaction of big tech? And given the representation of the big tech companies, what is the government doing to ensure that we do not end up being just the data and talent supplier?

Speaker 1

Last one in the second row, please. Thank you.

Audience

Sir, Momita from PTI. I think everybody is curious to know the contours of the New Delhi Declaration. The focus and thrust for India has rightly been impact: the use cases and how it benefits the public. If you could just give us some colour on what the New Delhi Declaration’s contours look like, which are the areas where consensus has already been reached, where 70 countries are coming together and supporting those causes, and how it would benefit Indians.

Speaker 1

Third row. Anybody in the third row? Okay. Please.

Audience

Momita from Outlook Business. Thank you. Yes. So I wanted to understand: recently the French President urged India to be part of the social media ban for those under the age of 15. So has there been some sort of consensus that you have reached with other countries about this?

Speaker 1

Anybody in third row? Okay. Please.

Audience

Hello, sir. Himanshu Desai from Rajasthan Patrika. Sir, so I wanted to ask, like, what role will…

Ashwini Vaishnaw

Coming from Patrika, you should really ask in Hindi. Give me a chance to answer in Hindi too.

Audience

Yes, of course. Sir, I have been reading Patrika since childhood. Sir, I wanted to ask: today we also saw the briefing by Dr. Mohan Yadav, the Chief Minister of Madhya Pradesh. So what will the overall plan for the state governments be, and how will the government work together with the state governments? Like, if we specifically…

Speaker 4

State governments… Thank you. Thank you.

Audience

Hi, sir. Yaku Tali from DLU Hindi. Sir, my question to you is, what is the government doing about data protection? Because we are seeing OpenAI, ChatGPT and Microsoft taking access to all the data. Yesterday, a notification was also sent which said that you can now share your contacts and then reach out to your contacts. So, don’t you think that the data of all Indians is being taken?

Speaker 1

Hello, sir. Yes, right side.

Ashwini Vaishnaw

Yes, among them. Anybody else on this side? Third row? We are working with industry on that. In particular, for the course curriculum to be created in colleges and schools, inputs from industry are coming in continuously, and as it is finalized we will share it with you. We did the semiconductor curriculum together with industry, we did the telecom one together with industry, so we will do this one together with industry too, so that the knowledge is relevant, practical and useful for industry. The state governments will participate very closely in this, because ultimately the way to reach every single person can only be through the state governments. Sarvam has measured up to the others on almost every benchmark, and in particular it is better on several benchmarks than OpenAI’s, DeepSeek’s and Gemini’s Pro models. If you want, you can look at what they have released on all the globally accepted parameters.

The new SGI regulations have been accepted, and everybody has said that this is a necessity for the country; many countries in the world are already talking about bringing in regulations in this direction. In fact, many countries have congratulated India for taking the first step, and in the coming time many more countries will adopt this kind of watermarking. The main purpose of this is transparency: is it real content or synthetic content? That transparency is necessary so that you can decide for yourself whether to trust it or not.

The second thing, which is also very important, is the principle that what is illegal in the physical world is also illegal in the online world. It is a very natural constitutional mandate, so I didn’t find anyone who opposed it. If you find someone, do reach me. The techno-legal framework is growing very fast. I have already given a statement on children’s protection. The data protection framework is very strong. In fact, I don’t want to take names, but in the meetings over these three days, three countries have already said that they want to make their data-protection framework equal to India’s; they said your template is very good and they want to make similar laws. Now let’s move on. Fourth row, from the left side. Anybody in the fourth row? Please.

Audience

Greetings, I am Sandeep from Prabhat Khabar, Jharkhand. With all the buzz around AI, the ones who most need training in today’s times are children. So will some kind of module or course be started in schools, so that children of seven, eight or ten years of age are given AI training? Is there any such scheme, any planning?

Speaker 1

Next. Go ahead. Next.

Audience

Thank you, sir. Arundeep from The Hindu. So just one question. You’ve had the opportunity to interact with a lot of leaders in AI and world leaders on a range of subjects. Does the government of India, after this event, believe that AGI is coming in the next two years? What is the government’s position on that, clearly? And if so, are we prepared for that as a country? And I lied, I have another question. Second question is, the next summit is going to be held in Switzerland. But given the response to this edition, is this something that we might do again in the coming future?

Speaker 1

Okay. Ajay.

Audience

Sir, the question is: the democratization that India has achieved with UPI, will we try the same in AI? And how has the whole world received this democratized, open-source model? We would like to understand that. Thank you, sir.

Speaker 1

Fourth row, anybody else? Highest, please.

Audience

Hi, good evening, sir. Ashmit from CNBC TV18. Firstly, congratulations on the largest-ever AI summit. I had two questions, sir. Amongst the companies that were here were also the likes of NVIDIA and AMD. One concern: India is going for a data-centre build-out, as was evident from the large commitments that we have seen, and the cost of compute, the cost of chips, is something that constantly kept coming up in the conversations. Has that been discussed? And are there any material assurances or gains for India under the Pax Silica arrangement? That’s one. Second, you spoke earlier about diffusion. I just want to get a little clarity on the Mission 2.0 that you made a reference to.

For a lot of these AI-for-social-purpose applications, the ROI may not be immediately available to the developer. In such a case, is the government willing to step in under Mission 2.0 with some form of support or viability-gap funding? Okay. Can I ask? Okay, I’m Surabhi from the Economic Times, sir. Two questions. One, I wanted to understand from you that from the time we launched the first version of the India AI Mission to now, a lot has changed in the AI ecosystem. So what are going to be the main focus areas of the next phase of the AI mission? Secondly, I know you want to talk about the declaration tomorrow and not today, but I wanted to understand: you have had meetings with the biggest names in AI.

What are some of the things, the discussion points, that have come up? What are some of the asks that they have made of you, and that you have made of them, as far as their contributions to India are concerned?

Speaker 1

Mr. Roshan, no matter how many questions you ask, they have been answered. Fifth row, anybody? It’s less than two minutes.

Audience

Good evening, Mr. Minister. This is Arunodai Mukherjee from the BBC. I just wanted to draw your attention to the U.S. delegation, which was here earlier today. They have very strongly rejected calls for global governance of AI. I wanted your response to that. And doesn’t that go against what this entire summit was all about, charting a unified path towards global governance? How would you respond to that?

Speaker 1

Yes. Thanks. Amrit Pal.

Audience

Minister, this is Amrit Pal from DD India. The IMF chief today said that AI could lift global growth by a percentage point and help India achieve… How is the government preparing to deal with that? My question to you is, in the face of rising deepfakes and sophisticated, AI-generated misleading information, how does the government ensure accountability without touching ease of doing business for startups?

Speaker 1

Back side. Brahma Prakash from Zee News. Thank you. Next. Yes, please.

Audience

So my question is related to the declaration. I do understand that you want to talk about it tomorrow, but could you throw some light on whether there is some sort of consensus on demarcating high-risk AI, or will that be left to national governments to decide and demarcate? Thank you.

Speaker 1

Yes, please, on the left side. Please pass on.

Audience

Sir, my question is in regard to the Global South. Since this was the first summit to be held in a Global South country, we saw significant representation; for Africa, there was an Africa AI village. So my question is: can we see Global South priorities, in terms of how AI should be developed, reflected in the joint statement? And what, according to you, are the major takeaways for the broader Global South? And since Prime Minister Modi also championed the inclusion of the African Union during the G20 summit… Thank you.

Speaker 1

Anyone else? Yes, please. Back side, one person is left.

Audience

Hi, sir. Jatin Grover from Mint. A couple of questions. I wanted to understand whether there have been any discussions with the participating nations, maybe to create a G20-like group, to help us create some sort of binding agreements with the nations on the AI declarations. That’s one. And till a few minutes back, at the ATL conference, you talked about having a legal framework to address the cybercrime arising from AI. Can you please elaborate more on that? What kind of legal framework is the government looking at? Thank you.

Speaker 1

Anybody else who wants to ask a question? Otherwise we are closing. Okay. Okay.

Audience

Hi, sir, from the Economic Times. This is about the 12 foundation models that the India AI Mission is backing. We launched three of them. Do you have visibility on when the rest of the nine will be launched? And also, have you finalized the terms of the agreements with these companies on how much the Government of India will be getting in terms of equity, etc.?

Speaker 1

Last question, at the back side.

Audience

Good evening, sir. I am Shreyas Bharadwaj from IIM Indore and IIT Indore. I am an independent journalist but also a student of master’s classes in data science and management. Thank you for letting me speak. My question is very open, in two aspects. One, flying such a large aircraft must have brought many challenges; what was the government’s biggest learning from the AI Impact Summit 2026, as a learner, as a lifelong learner? Number two, on the tech side, what is the biggest lesson the government has taken from this whole summit? That’s my question. Thank you, sir.

Ashwini Vaishnaw

Okay, thank you very much. These questions were not far off a UPSC exam. Let me first take Pax Silica; that’s very important. See, we are trying to create, to develop, the complete ecosystem of the semiconductor industry in our country. To get that ecosystem, it’s very important that all the major players, the major countries where the ecosystem currently resides, should also support and encourage our journey. That’s why it’s very, very important that we had Pax Silica signed today. From all the discussions that we had, it very clearly emerged that the world looks at India as a clearly trusted partner for the semiconductor supply chain, which means that, the way the semiconductor industry will grow in our country in the coming years, it will emerge as a major sector.

It’s a very important sector, and that was very clearly evident from the discussions. The same thing will apply to… Do you know, in 2026 the highest-paid people in industry are not MBAs or fancy degree holders; they are agentic… Now, I recall two meetings in which people are looking at reducing power consumption by at least 50% and reducing cost significantly; some even said a fraction of the cost of the current chips. That kind of innovation is happening, and India will be a big beneficiary of it, because we are starting our design and semiconductor journey at a point where we can use all the benefits that we know about AI and optimize our design of chips for the new age.

We are not bound by the legacy of the past. We can actually make a new beginning, which is why we have challenged our startups in Semicon 2.0, where we want to have a series of deep-tech startups designing chips. I’ve spoken about the next steps. I’ve spoken about education, democratization, diffusion, and ROI. Yes, I believe that ROI will come from the applications, most of them the enterprise use cases which are visible here in large numbers. I read one story in the digital version of one of the big channels where this point was very clearly brought out: while people mostly focus on consumer-facing applications, a large number of enterprise solution providers are participating in this exhibition, and that is very important, both from the jobs perspective, from the IT industry’s health perspective, and from the direction India will take as a major player in the AI world going forward.

Yes, we have a comprehensive plan. As we have maintained from day one, every sector will benefit from this. On cybersecurity, so many sessions have happened. We just inaugurated a research institute between Zscaler and Airtel, and many more such initiatives are coming. When the text of the declaration comes out, you will be able to see its contours. The Global South, of course, participated in large numbers and is very interested in collaborating with us; that level of trust is there. When the next models are launched, we will keep sharing with you as we progress. We had committed one; we have done three.

So it is a journey which we will keep sharing with you. Learnings, many. One very surprising learning was how, when so many good things are happening, one small thing can be highlighted so much; that is a personal learning for me. It was also a learning for me that some people in politics, some in the opposition, do not even understand what today's youth wants, and they try to create things that are really sad in a way and funny in another, and who can explain it to them, I don't know. There are many learnings, and we will use them to improve future editions; this was a very large-scale event.

As I said, five lakh plus visitors have already come; we were just doing the estimate, and I think the actual number is about six, but we are being very conservative: only what is measured is what we share with you. That kind of participation is there. In the end, I would like to request the MEA to speak, because they have been a very important partner for us; your role has been stellar. I would also like to thank the Delhi Police and all the security personnel who were present throughout, and all our friends in the media: you played a very constructive role. A big round of applause for the media. Thank you, friends. Randhir.

Randhir Jaiswal

Thank you, sir. It has been a pleasure for us in the Ministry of External Affairs to work along with MeitY as Team India to put our best foot forward for the world. This event has been a success, may I say a grand success. We heard from the world leaders who were here. We had 20 world leaders attend this AI summit. In addition, we had 45 delegations represented at ministerial level from across the world. We also had 100 countries represented. Thank you.

Related Resources: Knowledge base sources related to the discussion topics (17)
Factual Notes: Claims verified against the Diplo knowledge base (6)
Confirmed (high)

“The summit achieved a Guinness World Record for youth participation (over 2.5 lakh students).”

The press briefing notes that the summit achieved a Guinness World Record, confirming the record claim [S7].

Confirmed (high)

“The summit drew delegations from more than 100 countries.”

Stakeholder statements mention representation of more than 100 countries at the summit [S74].

Confirmed (high)

“20 world leaders attended the summit.”

The Ministry of External Affairs highlighted that 20 world leaders were present [S17].

Confirmed (high)

“Investment pledges surpassed $250 billion for infrastructure.”

The press briefing reports over $250 billion in infrastructure investment pledges [S7].

Additional Context (medium)

“AI‑Mission 1.0 operates 38,000 GPUs with an additional 20,000 planned, bringing total capacity to roughly 58,000.”

Comments on AI Mission indicate India aims for a total of 50,000-60,000 GPUs, aligning with the reported target range [S86].

Confirmed (medium)

“AI‑Mission 2.0 will expand GPU capacity to about 58,000.”

The same source notes the goal of reaching 50,000-60,000 GPUs under the next phase of the mission [S86].
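As a quick arithmetic check on the two GPU-capacity notes above, a minimal sketch (the 38,000 deployed, 20,000 planned, and 50,000-60,000 target-range figures come from the notes themselves; the script is purely illustrative):

```python
# Illustrative check of the GPU-capacity arithmetic reported in the notes.
deployed = 38_000   # GPUs operating under AI Mission 1.0
planned = 20_000    # additional GPUs planned

total = deployed + planned
print(total)  # 58000, the "roughly 58,000" figure in the note

# The knowledge-base source [S86] cites a 50,000-60,000 GPU target range;
# the computed total falls inside it.
target_low, target_high = 50_000, 60_000
print(target_low <= total <= target_high)  # True
```

This only verifies that the "roughly 58,000" figure is consistent with the component numbers and the cited target range; a separate source [S61] mentions 62,000, which falls just outside that range.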

External Sources (87)
S1
AI-Powered Chips and Skills Shaping Indias Next-Gen Workforce — -Ashwini Vaishnaw- Role/Title: Honorable Minister (appears to be instrumental in India’s semiconductor industry developm…
S2
Announcement of New Delhi Frontier AI Commitments — -Shri Ashwini Vaishnaw: Role/Title: Honorable Minister for Electronics and Information Technology, Area of expertise: El…
S3
AI and Global Power Dynamics: A Comprehensive Analysis of Economic Transformation and Geopolitical Implications — -Ashwini Vaishnaw- Minister for Economic Electronics and Information Technology of India
S4
WS #280 the DNS Trust Horizon Safeguarding Digital Identity — – **Audience** – Individual from Senegal named Yuv (role/title not specified)
S5
Building the Workforce_ AI for Viksit Bharat 2047 — -Audience- Role/Title: Professor Charu from Indian Institute of Public Administration (one identified audience member), …
S6
Nri Collaborative Session Navigating Global Cyber Threats Via Local Practices — – **Audience** – Dr. Nazar (specific role/title not clearly mentioned)
S8
GermanAsian AI Partnerships Driving Talent Innovation the Future — -Mr. Govind Jaiswal- Title: Joint Secretary at the Ministry of Education of the Government of India; Area of expertise: …
S9
Responsible AI for Children Safe Playful and Empowering Learning — -Speaker 1: Role/title not specified – appears to be a student or child participant in educational videos/demonstrations…
S10
S11
Press Briefing by HMIT Ashwani Vaishnav on AI Impact Summit 2026 l Day 5 — -Speaker 4: Role/title not mentioned (made a brief interjection during the session)
S12
Keynote-Martin Schroeter — -Speaker 1: Role/Title: Not specified, Area of expertise: Not specified (appears to be an event moderator or host introd…
S13
Responsible AI for Children Safe Playful and Empowering Learning — -Speaker 1: Role/title not specified – appears to be a student or child participant in educational videos/demonstrations…
S14
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Vijay Shekar Sharma Paytm — -Speaker 1: Role/Title: Not mentioned, Area of expertise: Not mentioned (appears to be an event host or moderator introd…
S15
(Interactive Dialogue 4) Summit of the Future – General Assembly, 79th session — Felipe Paulier: Thank you to both co-chairs, Prime Minister of San Martin and Prime Minister of Jamaica, dear delegate…
S16
India plans to boost semiconductor industry with an ecosystem approach — The Indian government plans to lure investments from four to six semiconductor companies in the next year, according to Ind…
S17
https://dig.watch/event/india-ai-impact-summit-2026/press-briefing-by-hmit-ashwani-vaishnav-on-ai-impact-summit-2026-l-day-5 — And this will require a lot of hard work. And we are prepared to put that hard work, put that effort. Our Prime Minister keep…
S18
Leaders’ Plenary | Global Vision for AI Impact and Governance- Afternoon Session — And I have a deep belief that the entrepreneurial ecosystem in India is going to deliver some incredible global leaders …
S19
Partnering on American AI Exports Powering the Future India AI Impact Summit 2026 — Industry representatives provided concrete examples of this collaboration in action. Sanjay Mehrotra from Micron describ…
S20
First round of informal consultations with member states, observers and stakeholders (2024) — Strict adherence to the speaking guidelines ensured statements were direct and concise, aiding the facilitation of an or…
S21
Informal multistakeholder session — The Chair maintained a neutral stance throughout. Discussions included the narrative that small developing countries, of…
S22
Building Indias Digital and Industrial Future with AI — The discussion maintained a collaborative and forward-looking tone throughout, with industry experts, regulators, and po…
S23
Opening plenary: Global Internet Governance processes — By bridging communication gaps, these experts significantly strengthen global governance and collaborative endeavours. I…
S24
Closure of the session — Additionally, the Group recommended the establishment of a future mechanism with distinct attributes: unity, inclusivene…
S25
Day 0 Event #61 Accelerating progress for unified digital cooperation — There was a moderate level of consensus among speakers on key issues, particularly on the need for collaborative and fle…
S26
AI Impact Summit 2026: Global Ministerial Discussions on Inclusive AI Development — We must share guidelines to orient and guide the development of artificial intelligence in full…
S27
Building Public Interest AI Catalytic Funding for Equitable Compute Access — India is building one of the world’s most ambitious public interest compute ecosystems with 38,000 GPUs as public infras…
S28
State of play of major global AI Governance processes — Regarding South Korea’s proactive engagement, the government showcased its dedication to the ethics of AI by embracing O…
S29
Responsible AI in India Leadership Ethics & Global Impact part1_2 — High level of consensus with significant implications for the responsible AI landscape. The agreement suggests that indu…
S30
A Global Human Rights Approach to Responsible AI Governance | IGF 2023 WS #288 — This commitment is exemplified by the company-wide stance on facial recognition, which addresses ethical concerns surrou…
S31
Welcome Address — “strong IT background, dynamic startup ecosystem, make India a natural hub for affordable, scalable, and secure AI solut…
S32
WS #100 Integrating the Global South in Global AI Governance — Fadi Salim: Thank you. And this covers a little bit the grassroot element of it. So it’s awareness, diversity, inclusi…
S33
Global AI Policy Framework: International Cooperation and Historical Perspectives — Development | Capacity development | Legal and regulatory Global South Representation and Perspectives
S34
Announcement of New Delhi Frontier AI Commitments — Detailed mechanisms for how the anonymized insights will be collected and shared were not specified Specific implementa…
S35
Strengthening Corporate Accountability on Inclusive, Trustworthy, and Rights-based Approach to Ethical Digital Transformation — The GDC offers important benefits by creating consensus on definitions and directions for digital governance, providing …
S36
Press Briefing by HMIT Ashwani Vaishnav on AI Impact Summit 2026 l Day 5 — Because each of them spoke about AI and its role in public services. And secondly, the models that were launched under t…
S37
Advancing Scientific AI with Safety Ethics and Responsibility — The speakers demonstrated strong consensus on several key areas: the need for context-specific governance frameworks tai…
S38
Overview of AI policy in 10 jurisdictions — Summary: Brazil is working on its first AI regulation, with Bill No. 2338/2023 under review as of December 2024. Inspire…
S39
WS #172 Regulating AI and Emerging Risks for Children’s Rights — There is a positive trend of AI companies embracing safety by design principles and integrating them into their developm…
S40
INTRODUCTION — Rather than developing a framework of risks linked to general and thus cross-national assessments, it is t…
S41
Overview of AI policy in 15 jurisdictions — Summary China remains a global leader in AI, driven by significant state investment, a vast tech ecosystem and abundant …
S42
Need and Impact of Full Stack Sovereign AI by CoRover BharatGPT — “What raw material is needed for AI?”[9]. “sovereign AI comes to India, we’ll have the control”[56]. “Indian government …
S43
Unlocking Multistakeholder Cooperation within the UN System: Global Partnerships for Open Internet — Echoing the European Union’s commitment to these ideals, the presenter underscores how the EU’s legislative agenda stead…
S44
OPENING STATEMENTS FROM STAKEHOLDERS — Negotiations on the treaty’s content are underway with the 46 member states of the organization along with observer stat…
S45
ETHIO PA 2025 — BharatNet (India): It is the largest optic fibre deployment programme in the world initiated by the Government of India …
S46
TABLE OF CONTENTS — The Policy therefore aims to address ICT infrastructure and other ecosystem gaps through the use of several policy instr…
S47
The Battle for Chips — Diversifying chip production is seen as an insurance policy against disruptions. By reducing dependency on a single coun…
S48
Building Public Interest AI Catalytic Funding for Equitable Compute Access — The discussion maintained a consistently pragmatic and solution-oriented tone throughout. While acknowledging significan…
S49
Artificial General Intelligence and the Future of Responsible Governance — So my humble opinion is that compute is one element in a chain of elements and that sometimes we treat this element as t…
S50
Hard power of AI — AI also has the potential to create conflicts as differing views empowered by AI clash. However, the rise of Artificial …
S51
Keynote-Demis Hassabis — Perhaps the most striking aspect of Hassabis’s presentation is his prediction regarding the imminent arrival of artifici…
S52
Rethinking Africa’s digital trade: Entrepreneurship, innovation, & value creation in the age of Generative AI (depHub) — In summary, the analysis raises critical concerns regarding data protection, privacy, and ethical considerations. It und…
S53
Can (generative) AI be compatible with Data Protection? | IGF 2023 #24 — Involving citizens in the decision-making process fosters inclusivity and builds trust. Government institutions must be …
S54
Discussion Report: Sovereign AI in Defence and National Security — Faisal responds to concerns about competing global AI policies by arguing that the sovereign AI framework is adaptable t…
S55
Folding Science / DAVOS 2025 — Mentions that AGI development may take a five-year timescale rather than the one or two years some are predicting.
S56
Press Briefing by HMIT Ashwani Vaishnav on AI Impact Summit 2026 l Day 5 — -AI Summit Success and Global Participation: Minister Vaishnaw highlighted the phenomenal success of India’s AI Impact S…
S57
Powering AI _ Global Leaders Session _ AI Impact Summit India Part 2 — And if you combine with the AI and you build your AI stack properly, you are looking for round the clock green power. So…
S58
Keynote by Mathias Cormann OECD Secretary-General India AI Impact — This observation is thought-provoking because it reveals the dramatic shift in global investment patterns and highlights…
S59
Building Public Interest AI Catalytic Funding for Equitable Compute Access — India is building one of the world’s most ambitious public interest compute ecosystems with 38,000 GPUs as public infras…
S60
Driving Indias AI Future Growth Innovation and Impact — And in a short span, they’ve surpassed it. It’s about 38,000. And a roadmap is by the end of this year, it’s going to c…
S61
Leaders’ Plenary | Global Vision for AI Impact and Governance Morning Session Part 1 — India expanding GPU infrastructure from 38,000 to 62,000 GPUs within six months
S62
Responsible AI in India Leadership Ethics & Global Impact part1_2 — High level of consensus with significant implications for the responsible AI landscape. The agreement suggests that indu…
S63
Responsible AI in India Leadership Ethics & Global Impact — And now we’ve done that. Five years later, there is an open standard called the C2PA content credentials. If you browse …
S64
Evolving AI, evolving governance: from principles to action | IGF 2023 WS #196 — Suzanne Akkabaoui:Thank you so much. Thank you for the opportunity to take part in this very interesting discussion. And…
S65
A Global Human Rights Approach to Responsible AI Governance | IGF 2023 WS #288 — This commitment is exemplified by the company-wide stance on facial recognition, which addresses ethical concerns surrou…
S66
AI-Powered Chips and Skills Shaping Indias Next-Gen Workforce — -Building India’s Role in Global Supply Chains: Discussion of making India an indispensable part of the global semicondu…
S67
The Battle for Chips — Addressing power consumption concerns in the semiconductor industry, India is actively engaged in research on advanced p…
S68
Leaders’ Plenary | Global Vision for AI Impact and Governance- Afternoon Session — The semiconductor sector also saw significant commitments, with Micron’s Sanjay Mehrotra highlighting their advanced pac…
S69
WS #100 Integrating the Global South in Global AI Governance — Fadi Salim: Thank you. And this covers a little bit the grassroot element of it. So it’s awareness, diversity, inclusi…
S70
Artificial intelligence (AI) – UN Security Council — The global focus on Artificial Intelligence (AI) capacity-building efforts has been a significant topic of discussion am…
S71
Keynote ‘I’ to the Power of AI An 8-Year-Old on Aspiring India Impacting the World — 8 year old prodigy: Sharing is learning with the rest of the world. One, an AI that is independent. From large global A…
S72
Taking Stock — ## Global South Representation and Participation Barriers
S73
Keynote Adresses at India AI Impact Summit 2026 — -Ashwini Vaishnav- Minister (India) Multiple speakers emphasised India’s unique combination of technological capabiliti…
S74
https://dig.watch/event/india-ai-impact-summit-2026/welcome-address — I welcome all of you, heads of governments, global AI ecosystem leaders, and innovators to this summit. India is the sou…
S75
Book presentation: “Youth Atlas (Second edition)” | IGF 2023 Launch / Award Event #61 — Furthermore, the significant participation of young people at recent events is emphasised. Youth involvement, especially…
S76
IGF 2023 Global Youth Summit — Audience:Because this tendency of saying young people are the future. Young people are not the future. They are now. Tha…
S77
Designing Indias Digital Future AI at the Core 6G at the Edge — For India specifically, Saluja emphasized that the wireless nature of the economy makes this transformation particularly…
S78
(Day 5) General Debate – General Assembly, 79th session: morning session — Subrahmanyam Jaishankar – India: Madam President, Excellencies, distinguished members of the General Assembly, greeting…
S79
Bridging the Digital Skills Gap: Strategies for Reskilling and Upskilling in a Changing World — Himanshu Rai: Thank you very much. It’s always useful to be the last speaker because I can claim that I had the last wor…
S80
https://dig.watch/event/india-ai-impact-summit-2026/national-disaster-management-authority — Thanks Manish. Really a great question and probably in this room I’ll be calling something which is very very important …
S81
The New Delhi G20 Summit: Reflections from India — The G20’s commitment on overcoming challenges to implement Agenda 2030 and its SDGs was evident in the outcomes reported…
S82
World Economic Forum Annual Meeting Closing Remarks: Summary — – André Hoffmann- Larry Fink Annual Meeting Success and Achievements Annual meeting achieved exceptional success and r…
S83
Main Topic 1 –  Human Rights in the Digital Era: Europe’s Role in Safeguarding Human Rights Online  — Vessela Karloukovska:Good morning everybody. My name is Vessela Karloukovska and I’m a policy officer at DG Connect, the…
S84
Trade Deals or Disputes? / DAVOS 2025 — Vandita Pant, CFO of BHP, brought attention to the unprecedented demand for resources driven by development, energy tran…
S85
Rewriting Development / Davos 2025 — Lutfey Siddiqi: So I agree that we do need more funding, more concessional funding, better pricing of that funding, b…
S86
Sovereign AI for India – Building Indigenous Capabilities for National and Global Impact — Absolutely, Ankit, just trying to, this is something which I know two years back when we said that I’m putting 8000 GPUs…
S87
Leading in the Digital Era: How can the Public Sector prepare for the AI age? — Shri Sushil Pal:Thank you, Professor Jalasi, and thank you, UNESCO, for inviting me here. I must commend UNESCO on the r…
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
A
Ashwini Vaishnaw
6 arguments · 122 words per minute · 3,272 words · 1,600 seconds
Argument 1
Record‑level global participation, investment pledges and youth engagement demonstrate a phenomenal summit
EXPLANATION
Ashwini highlights that the summit attracted virtually every major AI player, a large number of startups, and massive youth involvement, indicating a high‑quality and impactful event. He also points to the substantial financial commitments secured during the summit.
EVIDENCE
He notes that the summit featured participation from practically every major AI player worldwide and many startups showcasing their work, describing the overall discussion as phenomenal [1-4]. He mentions involving two and a half lakh students, a Guinness World Record for that engagement [9-10]. He cites investment pledges exceeding $250 billion for infrastructure and $20 billion for VC deep-tech investments [11].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The press briefing highlights unprecedented international participation, a Guinness World Record for 2.5 lakh student engagement, and pledges of $250 billion for infrastructure and $20 billion for VC deep-tech investments [S7]; youth involvement is also emphasized in a UN assembly dialogue on young people’s participation [S15].
MAJOR DISCUSSION POINT
Summit success and impact
AGREED WITH
Randhir Jaiswal
Argument 2
“Manav AI” vision, sovereign model bouquet, AI Mission 1.0 milestones (38,000 GPUs, 12 foundational models, 12 safety institutes) and launch of AI Mission 2.0
EXPLANATION
Ashwini outlines the government’s AI philosophy of ‘Manav AI’—human‑centric AI—and details concrete achievements of AI Mission 1.0, including hardware, models and safety institutes, before announcing the upcoming AI Mission 2.0.
EVIDENCE
He references Prime Minister Modi’s vision of ‘Manav AI’ and its acceptance by global AI players [5-6]. He describes the sovereign bouquet of models and the surprise of industry leaders at their quality despite limited resources [28-30]. He lists AI Mission 1.0 achievements: 38,000 GPUs (aim was 10,000) and a target of 58,000 soon [127-129]; 12 foundational models instead of the planned two [130-132]; and 12 AI safety institutes operating in a network [133-134]. He then signals the transition to AI Mission 2.0 as a larger next phase [121-124][125-128].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The briefing details the ‘Manav AI’ human-centric vision, achievement of 38,000 GPUs, 12 foundational models and 12 safety institutes, and announces the upcoming AI Mission 2.0 [S7]; related milestones are referenced in the New Delhi Frontier AI commitments announcement [S2].
MAJOR DISCUSSION POINT
India’s AI vision and mission progress
DISAGREED WITH
Audience
Argument 3
Over 70‑80 countries have signed the Delhi Declaration; numerous MOUs and bilateral talks create a framework for future binding agreements
EXPLANATION
Ashwini reports that a growing number of countries have endorsed the Delhi Declaration, and that many memoranda of understanding and bilateral discussions have been concluded, laying groundwork for more formalized commitments.
EVIDENCE
He states that the previous action summit had about 60 signatories and the current count has already crossed 70, expecting to exceed 80 by the summit’s close [14-19][21-22]. He later emphasizes that real action is happening through MOUs and collaborative understandings signed in the past few days [122-124].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The summit briefing reports that the Delhi Declaration grew from 60 to over 70 signatories, with expectations of exceeding 80 countries [S7]; the official New Delhi Frontier AI commitments announcement also notes the expanding list of signatories [S2].
MAJOR DISCUSSION POINT
International collaboration and commitments
AGREED WITH
Audience
Argument 4
$250 billion in infrastructure investment, $20 billion VC deep‑tech pledges, expansion to 58 k GPUs, semiconductor plant inauguration and PAC‑Silica initiatives to lower compute and chip costs
EXPLANATION
Ashwini quantifies the financial scale of the summit’s outcomes, noting massive infrastructure and venture‑capital pledges, a rapid increase in GPU capacity, and strategic moves in semiconductor manufacturing and supply‑chain resilience.
EVIDENCE
He cites $250 billion in infrastructure-related investments and $20 billion in VC deep-tech commitments [11]. He mentions the goal of 10,000 GPUs being surpassed with 38,000 already deployed and plans for an additional 20,000, bringing the total toward 58,000 [127-129]. He announces the upcoming foundation-laying ceremony for a new semiconductor plant in Uttar Pradesh and the imminent commercial start of a large Micron facility [45-46]. He also refers to the PAC-Silica initiative as a key step toward reducing compute and chip costs [122-124].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Infrastructure pledges of $250 billion, VC deep-tech commitments of $20 billion, a GPU target of 58,000, a new semiconductor plant foundation ceremony and the PAC-Silica cost-reduction initiative are all outlined in the press briefing [S7]; semiconductor plant plans are further described in a separate announcement [S1]; Micron’s $2.75 billion investment exemplifies concrete sector investment [S19].
MAJOR DISCUSSION POINT
Infrastructure, investment and economic viability
AGREED WITH
Audience
DISAGREED WITH
Audience
Argument 5
Engagement of 2.5 lakh students (Guinness World Record), emphasis on AI diffusion to the “last person,” and inclusive growth as a core policy
EXPLANATION
Ashwini emphasizes the massive youth participation, the goal of extending AI benefits to every citizen, and frames these efforts within the broader agenda of inclusive development championed by the government.
EVIDENCE
He notes that two and a half lakh students were involved, earning a Guinness World Record for that scale of engagement [9-10]. He later stresses the commitment to reach the “last person” in society, linking it to inclusive growth and citing examples of past government programmes such as Jan Dhan and Swachh Bharat [144-156].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The briefing records a Guinness World Record for 2.5 lakh student participants and stresses reaching the ‘last person’ as a policy goal [S7]; youth participation’s significance is echoed in a UN assembly dialogue [S15] and a ministerial comment on inclusive growth references the same objective [S17].
MAJOR DISCUSSION POINT
Education, democratization and diffusion
AGREED WITH
Audience
Argument 6
Introduction of SGI amendments, AI safety institutes, data‑protection framework, high‑risk AI demarcation and child‑safety measures
EXPLANATION
Ashwini outlines recent regulatory steps, including the adoption of SGI (Synthetically Generated Information) amendments, the establishment of safety institutes, and the formulation of data‑protection and high‑risk AI guidelines, all aimed at safeguarding users, especially children.
EVIDENCE
He describes collaboration with industry on curriculum and standards, and then details the SGI amendments that have been accepted globally, emphasizing the need for transparency about synthetic content [228-234]. He asserts that the SGI framework aligns illegal offline content with online illegality, reinforcing a constitutional mandate [235-238]. He also mentions a strong data-protection framework and notes that three countries have expressed interest in adopting India’s template [242-244].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
SGI amendments, the establishment of AI safety institutes, a data-protection framework and child-safety provisions are detailed in the summit briefing [S7]; the New Delhi Frontier AI commitments announcement also references these regulatory steps [S2].
MAJOR DISCUSSION POINT
Regulation, data protection and ethical AI frameworks
R
Randhir Jaiswal
2 arguments · 135 words per minute · 85 words · 37 seconds
Argument 1
Presence of 20 world leaders, 45 ministerial delegations and 100 countries confirms a grand international success
EXPLANATION
Randhir quantifies the high‑level diplomatic attendance, underscoring the summit’s status as a major global gathering on AI.
EVIDENCE
He reports that 20 world leaders attended, 45 ministerial delegations were represented, and a total of 100 countries participated in the summit [362-368].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The press briefing confirms attendance of 20 world leaders, 45 ministerial delegations and representation from 100 countries [S7]; a ministerial comment reiterates the 20 world leaders figure [S17].
MAJOR DISCUSSION POINT
Summit success and impact
AGREED WITH
Ashwini Vaishnaw
Argument 2
International delegations are expected to translate into concrete investment flows supporting India’s semiconductor and AI ecosystem
EXPLANATION
Randhir suggests that the presence of numerous foreign delegations will lead to tangible investment commitments that will bolster India’s semiconductor and AI sectors.
EVIDENCE
He reiterates the numbers of world leaders, ministerial delegations and country representations, implying that such diplomatic engagement is a catalyst for future investment flows [362-368].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The briefing links diplomatic participation to $250 billion infrastructure and $20 billion VC pledges, indicating expected investment flows [S7]; Micron’s $2.75 billion semiconductor investment provides a concrete example [S19].
MAJOR DISCUSSION POINT
Infrastructure, investment and economic viability
S
Speaker 1
3 arguments · 91 words per minute · 195 words · 127 seconds
Argument 1
Structured Q&A ensured orderly discourse and maintained the summit’s professional standards
EXPLANATION
Speaker 1 managed the question‑and‑answer session, directing participants to identify themselves and follow a systematic order, thereby keeping the discussion organized.
EVIDENCE
He asks each participant to state their name and organization before asking a question and repeatedly calls for the next question, e.g., “Thank you, sir. First of all, please identify yourself…”, “Next. Please.” [47-49][66][104][121].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The importance of strict speaking guidelines to ensure orderly discourse is documented in the informal consultation report [S20].
MAJOR DISCUSSION POINT
Summit success and impact
Argument 2
Moderation facilitated discussion on global consensus and the need for a unified governance approach
EXPLANATION
Speaker 1’s role as moderator helped surface concerns about the voluntary nature of commitments and guided the conversation toward the need for shared governance mechanisms.
EVIDENCE
He repeatedly prompts the audience for questions, ensuring that topics such as frontier-AI commitments and the Delhi Declaration receive attention, e.g., “Next. Please.” and “Anybody else on front row?” [47-49][66][104][121].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Facilitation of global consensus and a unified governance approach is described in the multistakeholder session summary [S21].
MAJOR DISCUSSION POINT
International collaboration and commitments
Argument 3
Moderation helped surface regulatory concerns and ensured they were addressed systematically
EXPLANATION
Through his facilitation, Speaker 1 brought forward audience queries about guardrails, ethical guidelines and legal mechanisms, allowing the minister to respond in a structured manner.
EVIDENCE
He calls on participants with regulatory questions, such as those about AI guardrails and data protection, and manages the flow of these questions throughout the session [47-49][66][104][121].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The session’s collaborative tone, bringing regulators into the dialogue and addressing regulatory concerns, is highlighted in the discussion overview [S22].
MAJOR DISCUSSION POINT
Regulation, data protection and ethical AI frameworks
A
Audience
5 arguments · 154 words per minute · 2,430 words · 946 seconds
Argument 1
Clarification sought on the focus areas and priorities of the upcoming AI Mission 2.0
EXPLANATION
An audience member asks for details on what the next phase of India’s AI mission will concentrate on and which priority areas will be emphasized.
EVIDENCE
The question asks how far and how long it will take to reach the “last person” in India, indicating a request for clarification on the diffusion and focus of AI Mission 2.0 [105-108].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The press briefing announces AI Mission 2.0 and notes audience queries about its focus and priority areas [S7].
MAJOR DISCUSSION POINT
India’s AI vision, mission achievements and future roadmap
AGREED WITH
Ashwini Vaishnaw
DISAGREED WITH
Ashwini Vaishnaw
Argument 2
Concern that voluntary frontier‑AI commitments and non‑binding declarations may remain on paper without enforcement mechanisms
EXPLANATION
An audience participant expresses worry that the voluntary nature of the frontier‑AI commitments and the non‑binding Delhi Declaration could limit their practical impact.
EVIDENCE
The audience repeats that the frontier AI commitment is voluntary and asks how to ensure it does not remain merely on paper [88-90].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The New Delhi Frontier AI commitments announcement outlines the voluntary nature of the frontier-AI pledge [S2]; the summit briefing also records concerns about the non-binding status of the Delhi Declaration [S7].
MAJOR DISCUSSION POINT
International collaboration and commitments
AGREED WITH
Ashwini Vaishnaw
DISAGREED WITH
Ashwini Vaishnaw
Argument 3
Queries on the high cost of compute, chip pricing and the possibility of government viability‑gap funding for socially beneficial AI applications
EXPLANATION
Several audience members inquire about the expense of compute resources and chips, and whether the government will provide financial support for AI projects that deliver social benefits but lack immediate ROI.
EVIDENCE
Questions raise concerns about compute and chip costs, referencing the PAC-Silica arrangement, and ask if the government will offer viability-gap funding for socially valuable AI applications [268-272][274-277].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The PAC-Silica initiative aimed at lowering compute and chip costs is described in the briefing [S7]; Micron’s substantial semiconductor investment provides context on chip pricing and industry funding [S19].
MAJOR DISCUSSION POINT
Infrastructure, investment and economic viability
AGREED WITH
Ashwini Vaishnaw
DISAGREED WITH
Ashwini Vaishnaw
Argument 4
Request for nationwide AI training modules for school children (ages 5‑10) and affordable AI skill programs
EXPLANATION
An audience member seeks information on whether the government will launch AI education modules for young school‑age children and affordable training programmes.
EVIDENCE
The participant asks if a module will be started in schools to train children aged five to ten in AI, mentioning the need for such a programme [245-250].
MAJOR DISCUSSION POINT
Education, democratization and diffusion to the masses
AGREED WITH
Speaker 4
Argument 5
Demand for concrete guardrails, ethical guidelines, and legal mechanisms to enforce AI declarations and protect data
EXPLANATION
The audience calls for specific policy instruments, such as ethical guardrails and legal frameworks, to ensure that AI commitments are actionable and data protection is upheld.
EVIDENCE
The question asks whether there are guidelines or guardrails that have been prepared by the participating countries for responsible AI use, and requests a paper outlining the first set of blueprints [68-71].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Guardrails such as SGI amendments, AI safety institutes and a data-protection framework are detailed in the summit briefing [S7]; the frontier AI commitments announcement references the need for ethical guidelines and enforcement mechanisms [S2].
MAJOR DISCUSSION POINT
Regulation, data protection and ethical AI frameworks
AGREED WITH
Ashwini Vaishnaw
DISAGREED WITH
Ashwini Vaishnaw
Speaker 4
2 arguments · 17 words per minute · 10 words · 34 seconds
Argument 1
State governments must be actively involved in implementing AI initiatives and curricula at the grassroots level
EXPLANATION
Speaker 4 stresses that state governments need to play a central role in delivering AI education and ensuring that AI initiatives reach local communities.
EVIDENCE
He replies in Hindi that questions should be asked in Hindi and indicates that state governments will be involved, emphasizing the need for effort and hard work at the state level [212-218]. Later, he mentions that state governments will closely participate in curriculum development and implementation [228-230].
MAJOR DISCUSSION POINT
Education, democratization and diffusion to the masses
AGREED WITH
Audience
Argument 2
State governments’ participation is crucial for delivering AI education and ensuring equitable access across regions
EXPLANATION
Speaker 4 reiterates that state governments are essential partners for scaling AI education and guaranteeing that all regions benefit equally.
EVIDENCE
He again emphasizes the role of state governments in collaborating with industry to create curricula and reach the “last person,” noting that state governments are the primary channel for nationwide outreach [212-218][228-230].
MAJOR DISCUSSION POINT
Education, democratization and diffusion to the masses
Agreements
Agreement Points
Record‑level global participation, investment pledges and youth engagement demonstrate a phenomenal summit
Speakers: Ashwini Vaishnaw, Randhir Jaiswal
Record‑level global participation, investment pledges and youth engagement demonstrate a phenomenal summit
Presence of 20 world leaders, 45 ministerial delegations and 100 countries confirms a grand international success
Both speakers highlight the unprecedented scale of the summit, noting participation of virtually every major AI player, massive youth involvement (2.5 lakh students) and substantial financial commitments, as well as the attendance of 20 world leaders, 45 ministerial delegations and representatives from 100 countries [1-4][9-11][362-368].
Over 70‑80 countries have signed the Delhi Declaration; numerous MOUs and bilateral talks create a framework for future binding agreements
Speakers: Ashwini Vaishnaw, Audience
Over 70‑80 countries have signed the Delhi Declaration; numerous MOUs and bilateral talks create a framework for future binding agreements
Concern that voluntary frontier‑AI commitments and non‑binding declarations may remain on paper without enforcement mechanisms
Ashwini reports that the declaration already has more than 70 signatories and that many MOUs have been signed, stressing real action beyond paper, while audience members express worry that the voluntary nature of the commitments could limit their impact, prompting the minister to assure implementation [14-22][122-124][68-71][88-90].
POLICY CONTEXT (KNOWLEDGE BASE)
The broad international endorsement mirrors the UN-led multistakeholder push where over 70 countries backed the Delhi Declaration, reflecting a trend of voluntary AI commitments that lack binding enforcement [S43]. The New Delhi Frontier AI commitments similarly omit detailed implementation mechanisms, underscoring the provisional nature of the framework [S34].
“Manav AI” vision, sovereign model bouquet, AI Mission 1.0 milestones (38k GPUs, 12 foundational models, 12 safety institutes) and launch of AI Mission 2.0
Speakers: Ashwini Vaishnaw, Audience
“Manav AI” vision, sovereign model bouquet, AI Mission 1.0 milestones (38k GPUs, 12 foundational models, 12 safety institutes) and launch of AI Mission 2.0
Clarification sought on the focus areas and priorities of the upcoming AI Mission 2.0
Ashwini outlines the achievements of AI Mission 1.0 (38,000 GPUs, a bouquet of 12 foundational models and 12 safety institutes) and announces the larger AI Mission 2.0, while audience members ask for details on the priority areas and timeline of the next phase [127-134][121-125][105-108][279-283].
POLICY CONTEXT (KNOWLEDGE BASE)
The sovereign AI approach aligns with analyses emphasizing a full-stack sovereign AI strategy to retain national control over data and models, and to adapt regulatory layers to local contexts [S42][S54].
Introduction of SGI amendments, AI safety institutes, data‑protection framework, high‑risk AI demarcation and child’s‑safety measures
Speakers: Ashwini Vaishnaw, Audience
Introduction of SGI amendments, AI safety institutes, data‑protection framework, high‑risk AI demarcation and child’s‑safety measures
Demand for concrete guardrails, ethical guidelines, and legal mechanisms to enforce AI declarations and protect data
Ashwini describes the newly adopted SGI amendments, the network of AI safety institutes, a strong data-protection framework and measures for high-risk AI and child safety, while audience participants request specific ethical guardrails and legal mechanisms to make the commitments actionable [228-238][242-244][68-71].
POLICY CONTEXT (KNOWLEDGE BASE)
Context-specific governance frameworks that include pre-deployment safety assessments and child-focused safeguards have been advocated as best practice for developing economies, highlighting the need for explicit safety institutes and data-protection rules [S37][S39][S53].
Engagement of 2.5 lakh students (Guinness World Record), emphasis on AI diffusion to the “last person,” and inclusive growth as a core policy
Speakers: Ashwini Vaishnaw, Audience
Engagement of 2.5 lakh students (Guinness World Record), emphasis on AI diffusion to the “last person,” and inclusive growth as a core policy
Queries on the high cost of compute, chip pricing and the possibility of government viability‑gap funding for socially beneficial AI applications
Ashwini stresses that the summit involved 250,000 students, that AI benefits must reach the ‘last person’ and that inclusive growth underpins the government’s approach; audience members echo this concern by asking how quickly AI can reach the most remote citizens and whether the government will support socially valuable AI projects financially [144-156][105-108].
POLICY CONTEXT (KNOWLEDGE BASE)
India’s BharatNet programme, the world’s largest fibre-to-the-last-mile initiative, provides the connectivity backbone envisioned for reaching the “last person” with AI services, reinforcing inclusive growth goals [S45][S46].
State governments must be actively involved in implementing AI initiatives and curricula at the grassroots level
Speakers: Speaker 4, Audience
State governments must be actively involved in implementing AI initiatives and curricula at the grassroots level
Request for nationwide AI training modules for school children (ages 5‑10) and affordable AI skill programs
Speaker 4 argues that state governments are essential partners for delivering AI education and ensuring outreach to all regions, while audience members request concrete school-level AI training modules, indicating shared emphasis on state-driven grassroots implementation [212-218][228-230][245-250].
POLICY CONTEXT (KNOWLEDGE BASE)
Sovereign AI strategies stress the role of sub-national actors in tailoring AI deployment to local contexts, a view echoed in policy discussions on decentralized governance and state-level implementation [S42][S54].
$250 billion in infrastructure investment, $20 billion VC deep‑tech pledges, expansion to 58 k GPUs, semiconductor plant inauguration and PAC‑Silica initiatives to lower compute and chip costs
Speakers: Ashwini Vaishnaw, Audience
$250 billion in infrastructure investment, $20 billion VC deep‑tech pledges, expansion to 58 k GPUs, semiconductor plant inauguration and PAC‑Silica initiatives to lower compute and chip costs
Queries on the high cost of compute, chip pricing and the possibility of government viability‑gap funding for socially beneficial AI applications
Ashwini cites over $250 billion of infrastructure pledges, $20 billion VC commitments, a rapid increase in GPU capacity and the upcoming semiconductor plant and PAC-Silica programme aimed at reducing compute and chip costs; audience members raise concerns about the expense of compute and chips and ask whether the government will provide viability-gap funding for socially valuable AI projects [45-46][122-124][127-129][130-134][268-272][274-277].
POLICY CONTEXT (KNOWLEDGE BASE)
National chip-production diversification and public-interest catalytic funding are highlighted as mechanisms to reduce compute costs and build resilient supply chains, aligning with global calls for chip resilience and equitable compute access [S47][S48].
Similar Viewpoints
Both officials portray the summit as a historic, high‑impact event marked by unprecedented global participation, extensive youth involvement and massive financial commitments [1-4][9-11][362-368].
Speakers: Ashwini Vaishnaw, Randhir Jaiswal
Record‑level global participation, investment pledges and youth engagement demonstrate a phenomenal summit
Presence of 20 world leaders, 45 ministerial delegations and 100 countries confirms a grand international success
Both stress the necessity of concrete regulatory guardrails and legal mechanisms to ensure responsible AI use and data protection [228-238][242-244][68-71].
Speakers: Ashwini Vaishnaw, Audience
Introduction of SGI amendments, AI safety institutes, data‑protection framework, high‑risk AI demarcation and child’s‑safety measures
Demand for concrete guardrails, ethical guidelines, and legal mechanisms to enforce AI declarations and protect data
Both emphasize that AI benefits must reach every citizen, especially the most remote, and that the government should support socially valuable AI initiatives, even where immediate ROI is lacking [144-156][105-108].
Speakers: Ashwini Vaishnaw, Audience
Engagement of 2.5 lakh students (Guinness World Record), emphasis on AI diffusion to the “last person,” and inclusive growth as a core policy
Queries on the high cost of compute, chip pricing and the possibility of government viability‑gap funding for socially beneficial AI applications
Both acknowledge the progress of AI Mission 1.0 and seek clarity on the strategic priorities of the forthcoming AI Mission 2.0 [127-134][121-125][105-108][279-283].
Speakers: Ashwini Vaishnaw, Audience
“Manav AI” vision, sovereign model bouquet, AI Mission 1.0 milestones (38k GPUs, 12 foundational models, 12 safety institutes) and launch of AI Mission 2.0
Clarification sought on the focus areas and priorities of the upcoming AI Mission 2.0
Both call for strong state‑level participation to deliver AI education and skill development to school‑age children across the country [212-218][228-230][245-250].
Speakers: Speaker 4, Audience
State governments must be actively involved in implementing AI initiatives and curricula at the grassroots level
Request for nationwide AI training modules for school children (ages 5‑10) and affordable AI skill programs
Unexpected Consensus
Alignment between state‑government involvement and national inclusive‑growth agenda
Speakers: Speaker 4, Ashwini Vaishnaw
State governments must be actively involved in implementing AI initiatives and curricula at the grassroots level
Engagement with state governments for curriculum development and outreach
While Speaker 4 focuses on the operational role of state governments, Ashwini, whose primary narrative is national-level policy and inclusive growth, also stresses collaboration with state governments for curriculum and outreach, revealing an unexpected convergence on the importance of sub-national actors in achieving the inclusive-growth goal [212-218][228-230].
POLICY CONTEXT (KNOWLEDGE BASE)
Policy instruments that combine supply-side infrastructure investment with demand-side capacity building are designed to synchronize state-level actions with broader inclusive-growth objectives, as outlined in recent ICT policy frameworks [S46].
Overall Assessment

The speakers displayed strong convergence on the summit’s success, the scale of international participation, the transition from voluntary declarations to concrete MOUs, the progress and future direction of India’s AI Mission, the need for robust regulatory frameworks (SGI, data protection, guardrails), and the commitment to inclusive growth that reaches the last citizen. State governments and grassroots education were also identified as critical implementation partners.

High consensus – the alignment across ministries, the moderator, and audience questions indicates a unified policy stance, which enhances credibility for the Delhi Declaration, strengthens expectations of forthcoming investments, and signals a coordinated approach to AI governance, capacity building and inclusive diffusion.

Differences
Different Viewpoints
Voluntary frontier‑AI commitments and non‑binding Delhi Declaration may remain ineffective without enforcement mechanisms
Speakers: Ashwini Vaishnaw, Audience
Over 70–80 countries have signed the Delhi Declaration; numerous MOUs and bilateral talks create a framework for future binding agreements
Concern that voluntary frontier‑AI commitments and non‑binding declarations may remain on paper without enforcement mechanisms
Ashwini stresses that a large number of countries have already signed the declaration and that real action is underway through MOUs and bilateral talks [14-19][122-124], while audience members worry that the voluntary nature of the frontier-AI pledge and the non-binding status of the Delhi Declaration could leave them as mere paperwork, calling for concrete enforcement mechanisms [88-90][68-71].
POLICY CONTEXT (KNOWLEDGE BASE)
Analyses of the New Delhi Frontier AI commitments note the absence of concrete collection, sharing, and evaluation mechanisms, raising concerns about enforceability [S34]. Similar critiques of voluntary digital compacts highlight limited impact without binding obligations [S35].
Assurances on reducing compute and chip costs versus need for concrete guarantees and viability‑gap funding
Speakers: Ashwini Vaishnaw, Audience
$250 billion in infrastructure investment, $20 billion VC deep‑tech pledges, expansion to 58 k GPUs, semiconductor plant inauguration and PAC‑Silica initiatives to lower compute and chip costs
Queries on the high cost of compute, chip pricing and the possibility of government viability‑gap funding for socially beneficial AI applications
Ashwini points to the $250 billion infrastructure pledge, the upcoming semiconductor plant and the PAC-Silica programme as steps to make compute and chips cheaper [122-124][45-46], whereas audience members ask whether there are concrete assurances on chip pricing and suggest the government might need to provide viability-gap funding for AI projects that deliver social benefits but lack immediate ROI [268-277].
POLICY CONTEXT (KNOWLEDGE BASE)
Discussions on chip diversification and public-interest catalytic funding stress the necessity of guaranteed financing to bridge the viability gap for compute-intensive projects [S47][S48].
Scope and concrete priorities of AI Mission 2.0 and the timeline for reaching the ‘last person’
Speakers: Ashwini Vaishnaw, Audience
“Manav AI” vision, sovereign model bouquet, AI Mission 1.0 milestones (38 k GPUs, 12 foundational models, 12 safety institutes) and launch of AI Mission 2.0
Clarification sought on the focus areas and priorities of the upcoming AI Mission 2.0
Ashwini announces the transition to AI Mission 2.0, highlighting past milestones and a broader vision for human-centric AI [121-125], while audience members request specific details on the focus areas, priority sectors and the practical steps needed to ensure AI benefits reach the ‘last person’ in India, indicating uncertainty about how the mission will be operationalised [105-108][144-152].
Existence and detail of guardrails, ethical guidelines and data‑protection mechanisms
Speakers: Ashwini Vaishnaw, Audience
Introduction of SGI amendments, AI safety institutes, data‑protection framework, high‑risk AI demarcation and child’s safety measures
Demand for concrete guardrails, ethical guidelines, and legal mechanisms to enforce AI declarations and protect data
Ashwini asserts that SGI amendments, a network of AI safety institutes and a strong data-protection framework have already been put in place, positioning them as the needed guardrails [228-244], while audience members ask for a tangible paper outlining the first set of blueprints, ethical guardrails and enforcement tools, suggesting that the announced measures lack sufficient detail or public availability [68-71].
POLICY CONTEXT (KNOWLEDGE BASE)
International best-practice recommendations call for explicit guardrails, ethical standards, and robust data-protection regimes, especially for high-risk AI and children’s rights [S37][S39][S53].
Imminent arrival of Artificial General Intelligence (AGI) and national preparedness
Speakers: Audience, Ashwini Vaishnaw
Does the government believe that AGI is coming in the next two years? … are we prepared?
“Manav AI” vision, sovereign model bouquet, AI Mission 1.0 milestones (38 k GPUs, 12 foundational models, 12 safety institutes) and launch of AI Mission 2.0
An audience member asks whether the government expects AGI within two years and if India is ready for it, seeking a clear stance on near-term existential AI risks [250-252], while Ashwini’s responses focus on the broader AI Mission roadmap and do not directly address the AGI timeline, leaving a gap between expectations and official positioning [121-125].
POLICY CONTEXT (KNOWLEDGE BASE)
Expert commentary varies on AGI timelines, with some forecasting a five-year horizon while others caution against premature expectations, underscoring uncertainty for national preparedness strategies [S49][S50][S51][S55].
Unexpected Differences
Data‑protection concerns about foreign AI services versus claim of a strong national data‑protection framework
Speakers: Audience, Ashwini Vaishnaw
Sir, my question is about data protection because we see OpenAI, ChatGPT and Microsoft taking access to all the data … do you think all Indians are taking their data?
The data protection framework is very strong … three countries have said they want to make their data protection framework equal to India’s
The audience raises immediate worries about personal data being accessed by global AI providers, a point not directly addressed by Ashwini’s broad claim of a strong data-protection regime, revealing an unexpected tension between perceived data security and actual policy details [223-226][242-244].
POLICY CONTEXT (KNOWLEDGE BASE)
Comparative analyses of emerging AI regulations, such as Brazil’s risk-based AI bill, illustrate the challenges of establishing strong data-protection frameworks that can address cross-border AI services [S38][S53].
Expectation of AGI emergence within two years versus lack of explicit government position
Speakers: Audience, Ashwini Vaishnaw
Does the government believe that AGI is coming in the next two years? … are we prepared?
“Manav AI” vision, sovereign model bouquet, AI Mission 1.0 milestones … and launch of AI Mission 2.0
While the audience explicitly asks for a stance on the near-term arrival of AGI, Ashwini’s response focuses on broader mission milestones without addressing AGI timelines, an unexpected gap between stakeholder expectations and official communication [250-252][121-125].
POLICY CONTEXT (KNOWLEDGE BASE)
While some thought leaders forecast AGI within a few years, broader scholarly assessments suggest a longer horizon, highlighting the gap between optimistic predictions and the absence of an official policy stance [S55][S51][S49].
Overall Assessment

The principal disagreements revolve around (i) the enforceability of voluntary AI commitments and the Delhi Declaration, (ii) concrete assurances on compute‑chip cost reductions and potential viability‑gap funding, (iii) the concrete focus, priorities and timeline of AI Mission 2.0 and its diffusion to the ‘last person’, (iv) the specificity and public availability of ethical guardrails and data‑protection mechanisms, and (v) expectations about near‑term AGI emergence. While all speakers share a common vision of positioning India as a global AI hub and promoting inclusive growth, they diverge on the mechanisms, legal bindingness and operational details needed to translate rhetoric into actionable outcomes.

The level of disagreement is moderate to high. It reflects substantive gaps between high‑level political messaging and the detailed policy instruments expected by stakeholders. If unresolved, these gaps could undermine confidence in the summit’s outcomes, delay implementation of commitments, and limit the perceived credibility of India’s AI governance framework.

Partial Agreements
All three speakers concur that the summit achieved a high‑profile international gathering and that such participation is essential for advancing India’s AI agenda; however, Ashwini emphasizes youth involvement and MOUs, Randhir highlights diplomatic numbers, and Speaker 1 stresses procedural order, reflecting different views on what constitutes success [1-4][14-19][362-368][47-49].
Speakers: Ashwini Vaishnaw, Randhir Jaiswal, Speaker 1
Record‑level global participation, investment pledges and youth engagement demonstrate a phenomenal summit
Presence of 20 world leaders, 45 ministerial delegations and 100 countries confirms a grand international success
Structured Q&A ensured orderly discourse and maintained the summit’s professional standards
Both Ashwini and audience members share the goal of inclusive growth and reaching every citizen with AI benefits; Ashwini cites the Guinness record and policy rhetoric, while the audience asks for concrete timelines and mechanisms to achieve that diffusion, indicating agreement on the end‑goal but differing on the path forward [9-10][144-156][105-108].
Speakers: Ashwini Vaishnaw, Audience
Engagement of 2.5 lakh students (Guinness World Record), emphasis on AI diffusion to the “last person,” and inclusive growth as a core policy
Clarification sought on diffusion to the last person and how inclusive growth will be operationalised
Both parties agree that AI safety and ethical safeguards are necessary; Ashwini points to existing institutes and SGI amendments, whereas the audience seeks a more detailed, publicly available set of guardrails and enforcement provisions, showing consensus on the objective but divergence on implementation detail [132-134][228-244][68-71].
Speakers: Ashwini Vaishnaw, Audience
AI safety institutes, SGI amendments and child‑safety measures provide ethical safeguards
Demand for concrete guardrails, ethical guidelines, and legal mechanisms to enforce AI declarations and protect data
Takeaways
Key takeaways
The AI Impact Summit was a landmark success with record global participation, investment pledges and massive youth engagement.
India’s AI vision – “Manav AI” – and the achievements of AI Mission 1.0 (38,000 GPUs, 12 foundational models, 12 safety institutes) were highlighted, and the launch of AI Mission 2.0 was announced.
Over 70‑80 countries have signed or are expected to sign the Delhi Declaration, creating a broad international consensus on AI cooperation.
Substantial infrastructure commitments were announced: $250 bn in AI‑related infrastructure, $20 bn in VC deep‑tech pledges, a new semiconductor plant in Uttar Pradesh and the PAC‑Silica initiative to lower compute and chip costs.
Education and diffusion were emphasized: 2.5 lakh students participated, and the government pledged to bring AI training to schools and ensure AI reaches the “last person”.
Regulatory steps were outlined: SGI amendments, AI safety institutes, a data‑protection framework, high‑risk AI demarcation and child‑safety measures.
Resolutions and action items
Finalize and publish the Delhi Declaration (target 80+ signatories).
Launch AI Mission 2.0 with higher targets (expand to ~58,000 GPUs, add more foundational models, strengthen safety institutes).
Inaugurate the semiconductor plant in Uttar Pradesh and commence commercial production at the Micron facility on 28 May.
Implement the PAC‑Silica semiconductor ecosystem partnership to reduce chip and compute costs.
Develop and roll out AI curriculum modules for schools and higher‑education institutions in collaboration with industry.
Create a viability‑gap funding mechanism under Mission 2.0 for socially beneficial AI applications lacking immediate ROI.
Establish a legal framework for AI‑related cyber‑crime and data protection, aligned with the SGI amendments.
Continue bilateral MOUs and collaborations with global AI firms and nations to translate summit commitments into concrete projects.
Unresolved issues
How the voluntary frontier‑AI commitments and the non‑binding Delhi Declaration will be enforced and monitored.
Specific guardrails, ethical guidelines and a concrete blueprint for responsible AI that all signatories will adopt.
Timeline and terms for the remaining nine foundational models and the equity/share arrangements with partner companies.
Detailed cost‑reduction strategies for compute and chips, and the extent of government subsidies or guarantees.
Exact mechanisms for reaching the “last person” in remote or underserved regions, especially regarding 5G and connectivity.
Consensus on the definition and governance of high‑risk AI and whether it will be nationally or internationally demarcated.
Clarification on the role of global big‑tech firms in Indian public‑service AI deployments.
Implementation schedule for the SGI amendments, especially the three‑year takedown window and provenance requirements.
Status of the long‑pending TRP guidelines overhaul for traditional media.
Formation of a G20‑like binding agreement group on AI and the legal framework for AI‑related cyber‑crime.
Suggested compromises
Treat the frontier‑AI commitments as voluntary but back them with MOUs and concrete collaborative projects to move beyond paper.
Maximize the number of signatories to the Delhi Declaration while acknowledging that the declaration remains non‑binding, with a promise to work toward future binding accords.
Balance the need for rapid AI diffusion with inclusive growth by involving state governments in curriculum development and rollout.
Offer government‑backed viability‑gap funding for socially valuable AI use‑cases while still encouraging private sector investment.
Adopt an open‑minded stance to feedback from industry, media and civil society, promising iterative refinement of regulations and guardrails.
Thought Provoking Comments
Manav AI – AI of the humans, by the humans, for the humans – was presented as the core vision for responsible and ethical AI.
It framed the entire summit around a human‑centric, ethical approach, differentiating India’s AI agenda from purely commercial or militaristic narratives.
Set the tone for the discussion, prompting subsequent questions about guardrails, ethical guidelines, and inclusive growth. It also helped the minister justify the emphasis on responsible AI and the need for a sovereign model bouquet.
Speaker: Ashwini Vaishnaw
Involving two and a half lakh (250,000) students, achieving a Guinness World Record for participation.
Demonstrated massive grassroots engagement and positioned youth as a strategic asset for India’s AI future, highlighting a concrete metric of outreach.
Shifted the conversation toward diffusion and inclusivity, leading to questions about reaching the ‘last person’ in India and the role of education in AI adoption.
Speaker: Ashwini Vaishnaw
India has secured over $250 billion in infrastructure investment and $20 billion in VC deep‑tech commitments as a result of the summit.
Provided tangible economic evidence of the summit’s impact, moving the dialogue from abstract policy to real financial stakes.
Prompted follow‑up queries about concrete MOUs, the implementation of AI Mission 2.0, and how these funds will be channelled into specific projects such as semiconductor and compute infrastructure.
Speaker: Ashwini Vaishnaw
India’s ‘sovereign bouquet of models’ built with frugal resources can match the quality of frontier labs.
Challenged the prevailing belief that only well‑funded global labs can produce high‑quality AI models, showcasing India’s technical capability and self‑reliance.
Encouraged deeper discussion on model development, AI safety institutes, and the upcoming AI Mission 2.0, while reinforcing confidence among international partners.
Speaker: Ashwini Vaishnaw
Audience (Economic Times): “Have there been certain guidelines or guardrails agreed by all participating countries for ethical and responsible AI? Is there a paper that can be shared as a first blueprint?”
Moved the debate from rhetoric to the need for a concrete, documented framework, pressing the government for deliverables.
Triggered the minister’s detailed response about real MOUs, the AI Mission 2.0 roadmap, and the distinction between voluntary commitments and actionable policy, thereby deepening the policy‑implementation thread.
Speaker: Deepak Ajwani (Economic Times)
Audience (Money Control): “The frontier AI commitment is voluntary and the Delhi Declaration non‑binding. How do we ensure this does not remain on paper?”
Directly challenged the enforceability of the summit’s outcomes, questioning the credibility of voluntary pledges.
Prompted the minister to stress existing MOUs, the creation of AI safety institutes, and the upcoming AI Mission 2.0 as mechanisms that translate pledges into measurable actions.
Speaker: Anonymous (Money Control)
Audience (BBC): “The U.S. delegation rejected calls for global AI governance. Doesn’t that contradict the summit’s aim of a unified path?”
Introduced a geopolitical tension, highlighting a major dissenting voice and testing India’s stance on global AI governance.
Led the minister to emphasize India’s role as a trusted partner, the broad international participation, and the importance of building consensus despite divergent national positions.
Speaker: Arunodai Mukherjee (BBC)
Announcement of AI Mission 2.0: 38,000 GPUs (with 20,000 more soon), 12 foundational models, 12 AI‑safety institutes, and a roadmap for larger future goals.
Provided a concrete, quantifiable roadmap that moves the conversation from aspirational statements to specific deliverables and timelines.
Answered many earlier questions about implementation, gave journalists concrete data to report, and set the agenda for the next phase of India’s AI strategy.
Speaker: Ashwini Vaishnaw
Discussion of the PAX Silica semiconductor initiative and positioning India as a trusted partner in the global chip supply chain.
Connected AI development to the hardware ecosystem, underscoring the strategic importance of domestic semiconductor capability for AI sovereignty.
Shifted part of the dialogue toward supply‑chain resilience, cost of compute, and long‑term industrial policy, prompting follow‑up questions about chip costs and industry guarantees.
Speaker: Ashwini Vaishnaw
Overall Assessment

The most impactful moments of the discussion were driven by Ashwini Vaishnaw’s articulation of a human‑centric AI vision, the demonstration of massive youth participation, and the announcement of concrete investment figures and the AI Mission 2.0 roadmap. These statements established a narrative of ethical leadership and technical self‑reliance, which in turn provoked probing questions from journalists about governance, enforceability of voluntary commitments, and geopolitical challenges. The back‑and‑forth between the minister’s high‑level assurances and the media’s demand for tangible policy documents created a turning point from celebratory rhetoric to a deeper examination of implementation mechanisms. Collectively, these key comments steered the conversation toward measurable targets, highlighted India’s emerging role in the global AI ecosystem, and framed the summit’s outcomes as both a diplomatic achievement and a roadmap for future action.

Follow-up Questions
How does India plan to develop and promote day‑to‑day Python development tools, and what observation or indication did the Prime Minister give regarding the AI Impact Summit?
Clarifies India’s role in democratizing Python education and seeks the Prime Minister’s specific comments on the summit’s impact.
Speaker: Nishant Ketu (ANI)
Have the participating countries agreed on specific guidelines or guardrails for ethical, responsible AI, and is there a draft paper or blueprint that can be shared?
Seeks a concrete document outlining international consensus on AI ethics, essential for policy alignment.
Speaker: Deepak Ajwani (Economic Times)
What discussions have taken place with major global tech companies about their role in delivering public services in India?
Understanding the commitments of big tech to public sector AI applications informs future collaborations.
Speaker: Shauvik (Mint)
What are the key takeaways regarding the AI models launched under the AI Mission, and what is the roadmap for their further development?
Requests details on model performance, deployment plans, and future milestones critical for the AI ecosystem.
Speaker: Shauvik (Mint)
Given that frontier AI commitments are voluntary and the Delhi Declaration is non‑binding, what mechanisms will ensure their implementation?
Addresses concerns about enforceability of pledges, crucial for translating commitments into action.
Speaker: Oyeek (Money Control)
What will be the primary focus area of the upcoming declaration compared to previous summits?
Identifies the thematic priority of the new declaration, guiding stakeholders on expected outcomes.
Speaker: Ashish (Business Standard)
Which topics achieved consensus easily in previous AI summits and which required extensive negotiation?
Insights into consensus‑building help anticipate future negotiation challenges.
Speaker: Shubhan (Economic Times)
What is the timeline and strategy for ensuring AI benefits reach the ‘last person’ in India?
Seeks concrete plans for inclusive diffusion of AI technologies across all demographics.
Speaker: Unidentified audience member (last person standing)
Can policy interventions help balance advertising revenue between traditional media (TV, radio, print) and digital platforms, and what is the current status of the TRP guidelines overhaul?
Explores regulatory solutions for media revenue equity and the progress of pending TRP reforms.
Speaker: Lalit (Best Media Info)
How will the insights and sessions from the summit be translated to grassroots levels to benefit the common citizen?
Focuses on dissemination strategies to ensure summit outcomes impact everyday life.
Speaker: Prashant (AsiaNet News)
Which countries have already signed the declaration? Please provide a few examples.
Requests transparency on international participation and support for the declaration.
Speaker: Sejal Sharma (Hindustan Times)
What are the outcomes of each of the seven working groups formed before the summit, how has India’s leadership of the Global South materialized, and what discussions occurred on the SGI amendments (compliance deadline, takedown window, provenance)?
Seeks comprehensive results from working groups, assessment of Global South engagement, and details on SGI regulatory discussions.
Speaker: Independent journalist (unspecified)
Has the objective of a technological framework been achieved, how many countries have adopted it, what is the reaction of big tech, and is India at risk of becoming merely a data and talent supplier?
Evaluates progress on tech framework, global adoption, industry response, and strategic positioning of India.
Speaker: Manas (Times of India)
What are the contours, focus, and consensus areas of the New Delhi Declaration, and how will it benefit Indians?
Requests detailed content of the declaration and its direct implications for Indian stakeholders.
Speaker: Momita (PTI)
Has there been any consensus reached regarding a social‑media framework for users under the age of 15, as urged by the French President?
Looks for international agreement on protecting minors online, a key policy area.
Speaker: Momita (Outlook Business)
What role will state governments play in implementing the AI mission and collaborating with the central government?
Clarifies the coordination mechanism between central and state authorities for AI initiatives.
Speaker: Himanshu Desai (Rajasthan Patrika)
What steps is the government taking on data protection in light of concerns about AI platforms like OpenAI and Microsoft accessing user data?
Addresses privacy safeguards essential for public trust in AI services.
Speaker: Yaku Tali (DLU Hindi)
Will there be a dedicated AI training module or curriculum for school children (ages 5‑10) to build AI skills, and what plans exist for its rollout?
Seeks educational initiatives to build AI literacy from an early age.
Speaker: Sandeep (Prabhat Khabar)
Does the government believe Artificial General Intelligence (AGI) could emerge within the next two years, and is India prepared for it? Additionally, will future AI summits be held again, such as in Switzerland?
Probes strategic foresight on AGI timelines, preparedness, and continuity of international AI forums.
Speaker: Arundeep (The Hindu)
How has India’s approach to AI democratization (e.g., UBI, open‑source models) been received globally, and what are the plans for further democratization?
Evaluates global perception of India’s open‑source AI initiatives and future democratization strategies.
Speaker: Unidentified audience member (Economic Times)
Regarding the semiconductor supply chain, what assurances or gains does India have under the PAX Silica arrangement, and will the government provide viability‑gap funding for socially beneficial AI applications lacking immediate ROI under Mission 2.0?
Seeks concrete benefits from semiconductor partnerships and funding mechanisms for socially oriented AI projects.
Speaker: Ashmit (CNBC TV18)
What will be the main focus areas of AI Mission 2.0, and what key discussion points or asks have emerged from meetings with leading global AI companies?
Aims to outline the next phase priorities and understand expectations from major AI players.
Speaker: Surabhi (Economic Times)
How does the government respond to the U.S. delegation’s rejection of global AI governance, and does this stance conflict with the summit’s goal of unified AI governance?
Addresses diplomatic tension and the broader vision for international AI governance.
Speaker: Arunodai Mukherjee (BBC)
In light of the IMF’s statement on AI‑driven growth, how is the government preparing for macro‑economic impacts, and how will accountability for deep‑fakes be ensured without stifling startup innovation?
Combines macro‑policy preparation with regulatory balance for emerging AI risks.
Speaker: Amrit Pal (DD India)
Is there consensus on defining ‘high‑risk AI’, and will the demarcation be decided internationally or left to individual national governments?
Seeks clarity on risk classification standards critical for regulation.
Speaker: Unidentified audience member
Will the priorities of the Global South be reflected in the joint AI statement, and what are the major takeaways for Global South countries?
Ensures that the summit’s outcomes address the needs and interests of developing nations.
Speaker: Unidentified audience member (global south focus)
Are there discussions to create a G20‑like binding agreement group for AI, and what legal framework is being considered to address AI‑related cybercrime?
Explores the formation of enforceable international AI agreements and legal tools against AI‑enabled crimes.
Speaker: Jatin Grover (Mint)
When will the remaining nine foundation models be launched, and what are the agreed terms regarding government equity or other benefits in the partnerships with the model developers?
Requests timeline and financial terms for the full suite of foundational AI models.
Speaker: Economic Times (unspecified reporter)
What are the key lessons the government has learned from the AI Impact Summit, both in terms of large‑scale project execution and technology adoption?
Seeks reflective insights to improve future AI policy and implementation.
Speaker: Shreyas Bharadwaj (IIM/IIT Indore)

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

Powering AI _ Global Leaders Session _ AI Impact Summit India Part 2

Session at a glance: Summary, keypoints, and speakers overview

Summary

The panel examined how the rapid growth of artificial intelligence is creating unprecedented power and cooling demands for data centers worldwide, noting that a single large AI training run can consume as much electricity as thousands of homes in a year and that data centers and cooling are the two biggest sources of rising electricity consumption [5][56]. Ashish Khanna outlined the International Solar Alliance’s dual focus on “AI for Energy” – using AI to integrate decentralized solar, storage and peer-to-peer trading – and “energy for AI,” which addresses the surge in electricity use by data centers and cooling systems [27-33][54-57], emphasizing that about 40 % of recent solar growth is decentralized but often resisted by distribution companies, a gap AI-enabled digitisation could help bridge [21-25].


Professor Raghav Chandra argued that the single greatest constraint on AI’s future is the energy required by data centers, not algorithms or chips [73-74]. He cited high-profile outages at Meta, Google Cloud, AWS and Azure as evidence that power reliability is a critical vulnerability for AI services [75-85][86-94]. Current global data-center electricity consumption is about 415 TWh (1.5 % of total) and is projected to rise to roughly 945 TWh (3 %) by 2030, effectively adding the power demand of whole countries [113-119]. Chandra warned that reliance on fossil-fuel generation would increase emissions, raise electricity prices for nearby communities, and create social and environmental costs such as water scarcity, noise and land-use conflicts [121-128][133-136].


Nathan Blom highlighted that cooling innovation is moving from traditional air-cooled racks to liquid cooling and emerging two-phase technologies, which can improve power usage effectiveness (PUE) from around 1.5 to roughly 1.05, dramatically reducing the electricity needed for heat removal [244-265].


Vineet Mittal described how AI can make intermittent solar and wind generation dispatchable at 15-minute intervals by processing climatic, satellite and grid data, enabling an “always-on” clean-power grid [148-156]. He emphasized India’s rapid expansion to 50 GW of new solar-wind capacity this year, its abundant sun, wind and pumped-storage resources, and a single, heavily invested national grid that can deliver power across the country in real time [160-168][188-194]. Mittal also pointed to policy measures such as tax exemptions for foreign-collaborative data centers and the need for data-sovereignty legislation, while acknowledging uneven ease of doing business and coordination between central and state authorities as major hurdles [227-233][224-231].


The discussion concluded with consensus that India and other developing regions have significant renewable potential and cooling-technology opportunities, but that coordinated regulation, infrastructure investment and sustained innovation are essential to meet AI’s energy needs sustainably [242][312-313].


Keypoints


Major discussion points


The massive and growing energy demand of AI-driven data centers, and the reliability and environmental risks this creates.


The opening remarks note that a single AI training run can use as much electricity as “thousands of homes” [5-6] and that data-center power consumption is already comparable to a small country’s grid [55-57]. Raghav Chandra underscores recent high-profile outages at Meta, Google, AWS and Microsoft as evidence that “energy reliability … is a big-time issue” [73-80][84-92]. He quantifies the scale – global data-center electricity use is 415 TWh today and could reach 945 TWh by 2030, equivalent to the power demand of entire nations such as Australia or Spain [112-119]. He also warns of the downstream social, economic and climate costs of relying on fossil-fuel generation [124-129][133-137].


AI as an enabler for renewable-energy integration and decentralized power markets.


Ashish Khanna explains that the International Solar Alliance (ISA) has launched an “AI for Energy” mission, emphasizing that AI can digitise millions of prosumers to enable peer-to-peer (P2P) trading of rooftop solar and storage [20-27][31-36]. He highlights the current skill gap – “AI engineers do not understand energy, energy engineers do not understand AI” – and announces the creation of an ISI Academy to train hybrid talent [33-36]. He also points to a burgeoning innovation ecosystem of startups tackling generation, transmission and financing challenges [38-44][46-48].


India’s unique renewable-energy endowment and its strategic vision to become a global data-center hub.


Vineet Mittal describes India’s rapid expansion to 50 GW of solar and wind this year, its “abundance of sun, wind and water,” and the ability to pair these with pumped-storage and battery systems to deliver round-the-clock green power [148-166][170-188]. He stresses the country’s single, highly-interconnected grid that can move power from Rajasthan to Mumbai in real time, and the policy environment that offers tax exemptions for foreign-collaborative data-center projects [226-228][282-285]. Raghav adds that India’s data-center load could rise from ~1 GW today to 8-9 GW by 2030, but that “ease of doing business” and state-center coordination remain the biggest bottlenecks [214-222][224-233].


Innovation in cooling technologies as a critical lever for energy efficiency.


Nathan Blom argues that the next breakthrough will come from “small companies” developing advanced cooling, moving from traditional air-cooling to liquid-cooling and, more importantly, two-phase (boiling) cooling that can improve PUE from ~1.5 to ~1.05 [244-265]. Vineet echoes this, noting that cross-industry expertise (clean-room design, battery cooling, PUE optimisation) must be cultivated locally, and that India’s open-access grid enables flexible, real-time power for such high-efficiency cooling solutions [267-280][281-284].


Policy, regulatory and coordination challenges that must be addressed to scale AI-powered data centres.


Ashish asks the panel to consider how “policy and regulatory landscape” and “innovation landscape” can accelerate data-center deployment [198-212]. Raghav points to fragmented state-center governance, the need for synchronized permitting, and recent budget measures such as tax exemptions for data centres with foreign components [224-230][226-228]. Vineet adds that while some states (e.g., Maharashtra) are streamlining land and permitting, a “stack-ranking” of states is being introduced to ensure uniformity, and that data-localisation policies must be communicated to both industry and government [282-285][286-287].


Overall purpose / goal of the discussion


The session was convened to examine the twin challenges of “energy for AI” (the soaring power and cooling needs of AI-driven data centres) and “AI for energy” (how artificial intelligence can enable more efficient, decentralized renewable-energy systems). Panelists aimed to identify technical, regulatory and market solutions, particularly for developing economies like India, so that the rapid expansion of AI can be sustained without compromising reliability, affordability or the environment.


Tone of the discussion


– The conversation opens with a formal, urgent tone, emphasizing the scale of the problem (Announcer, Ashish) and citing recent outages.


– It then shifts to a constructive, optimistic tone, highlighting opportunities where AI can unlock renewable integration and where India’s resource base offers a strategic advantage.


– When addressing policy and implementation, the tone becomes candid and critical, acknowledging “ease of doing business” bottlenecks and coordination gaps.


– Finally, the panel ends on a hopeful, forward-looking tone, stressing innovation, ecosystem building and the potential for India to become a leading AI-data-center hub.


Overall, the tone moves from alarm to optimism, tempered by realistic acknowledgment of the work still required.


Speakers

Announcer


– Role/Title: Event announcer/moderator


– Area of Expertise:


Vineet Mittal


– Role/Title: Chairman of Avada Group, renewable energy developer


– Area of Expertise: Renewable energy, AI for energy, power grid integration [S4]


Nathan Blom


– Role/Title: Vice President, Cooling Chambers


– Area of Expertise: Data center cooling technologies, liquid and two-phase cooling solutions [S6]


Ashish Khanna


– Role/Title: Director General, International Solar Alliance (moderator)


– Area of Expertise: Solar energy, AI for Energy, international energy policy [S7]


Raghav Chandra


– Role/Title: Professor, IIM Calcutta; Founder & CEO, Consult; former Chairman, NHAI; former Secretary to Government of India


– Area of Expertise: Infrastructure policy, energy systems, AI impact on power demand [S9]


Audience


– Role/Title: Associate Member, Indian Institute of Public Administration (Umesh Prasad Singh)


– Area of Expertise: Public administration, policy analysis [S12]


Additional speakers:


(None identified beyond the listed speakers)


Full session reportComprehensive analysis and detailed insights

The session opened with the moderator framing the debate. He warned that AI is expanding at speed, driving “unprecedented power and cooling requirements” for data-centres, which today consume electricity equivalent to Spain’s grid [13-15]; the United States and China together account for roughly 70% of global data-centre demand [10-12]. This demand is projected to double every three years [16-18] and is further amplified by the electrification of cars/EVs [19-20]. The moderator highlighted the recent surge in solar capacity – 1,000 GW added in the last two years, matching what previously took 25 years – with about 40% of that growth being decentralised (rooftop, pumps, etc.) [20-22]. He noted that distribution companies often resist decentralised solar because it threatens revenue streams [23-25], but AI-enabled digitisation could help integrate these resources and lower system costs [25-26]. The International Solar Alliance (ISA) announced the launch of a global “AI for Energy” mission as part of the AI Impact Summit [21-23], and called for interoperable standards and new financing and de-risking models to enable large-scale projects [38-45][46-53][54-57].


Raghav Chandra then argued that the single greatest constraint on AI’s future is the energy required by AI-driven data-centres [73-74][75-94]. He cited high-profile power failures – Meta’s aborted nuclear-powered data-centre after a bee-colony incident, Google Cloud’s 2025 outage in Columbus, AWS’s 2019 failure in Northern Virginia, and similar setbacks at Microsoft Azure and TikTok’s US-DS joint venture [75-94]. He quantified current global data-centre electricity consumption at 415 TWh (≈1.5 % of world electricity) and warned it could rise to 945 TWh (≈3 %) by 2030, comparable to the demand of entire nations [112-119]. He outlined the broader environmental, social and equity ramifications – higher emissions, rising electricity prices for households, noise, heat, land-use conflicts and water scarcity [121-128][133-136], and answered an audience question on the global impacts, emphasizing water-use, social equity and climate-justice concerns [198-212].
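As a sanity check on these figures, the growth rate they imply is straightforward to compute. The sketch below is illustrative only: the 2024 baseline year and the smooth compound-growth model are assumptions not stated in the session.

```python
# Rough arithmetic behind the data-centre electricity figures cited in the
# session (415 TWh today, ~945 TWh projected by 2030). The 2024 baseline
# year and straight compound-growth model are assumptions for illustration.

def implied_cagr(start: float, end: float, years: int) -> float:
    """Compound annual growth rate implied by two endpoint values."""
    return (end / start) ** (1 / years) - 1

today_twh = 415.0    # cited current consumption
by_2030_twh = 945.0  # cited 2030 projection
years = 2030 - 2024  # assumed baseline year

cagr = implied_cagr(today_twh, by_2030_twh, years)
print(f"Implied growth rate: {cagr:.1%} per year")  # roughly 15% per year
```

Note that the cited share of global electricity only doubles (1.5% to 3%) while consumption grows about 2.3 times, which implicitly assumes total world electricity demand also keeps growing over the same period.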


Nathan Blom shifted the focus to cooling innovation. He traced the evolution from traditional air-cooled racks to liquid-cooling and, most importantly, emerging two-phase cooling where the coolant boils, delivering ten-to-twenty times higher heat-removal efficiency [259-264][260-262]. This technology can improve Power Usage Effectiveness from around 1.5 to as low as 1.05, dramatically cutting the electricity needed for heat removal and the overall power draw of data-centres [260-265]. Blom stressed that such breakthroughs are typically driven by small, agile startups that are later acquired by larger firms [247-248][267-270].
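The PUE numbers quoted translate directly into facility-level power. A minimal sketch, assuming a hypothetical 100 MW IT load (not a figure from the session):

```python
# Illustrative PUE arithmetic for the cooling figures discussed (PUE ~1.5
# for air cooling vs ~1.05 claimed for two-phase cooling). The 100 MW IT
# load is a hypothetical example, not a number from the panel.

def facility_power_mw(it_load_mw: float, pue: float) -> float:
    """Total facility power = IT load * PUE (PUE >= 1 by definition)."""
    return it_load_mw * pue

it_load = 100.0  # hypothetical IT load in MW

air = facility_power_mw(it_load, 1.5)          # 150 MW total draw
two_phase = facility_power_mw(it_load, 1.05)   # ~105 MW total draw

overhead_air = air - it_load              # 50 MW on cooling and overheads
overhead_two_phase = two_phase - it_load  # ~5 MW
print(f"Overhead cut: {1 - overhead_two_phase / overhead_air:.0%}")
```

Because PUE is total facility power divided by IT power, dropping it from 1.5 to 1.05 cuts the non-IT overhead (cooling, power conversion) by roughly 90%, even though total facility power falls by only about 30%.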


Vineet Mittal presented both “AI for Energy” and “energy for AI”. He explained that AI can schedule solar and wind output in 15-minute intervals, making intermittent renewables effectively dispatchable [151-156]. India is adding 50 GW of solar and wind capacity this year, positioning it as the world’s second-largest green-energy player after China [160-162]; the complementarity of solar and wind, together with pumped-storage and batteries providing 14-18 hours of power, enables round-the-clock green electricity [165-170][173-176]. India’s single, heavily-invested national grid can transmit power in real time from Rajasthan to Mumbai, supporting low-latency data-centre operation [188-194]. Policy levers such as recent budget tax exemptions for data-centres with foreign components [226-228] and open-access, real-time power trading were highlighted, though Mittal noted uneven “ease of doing business” across states [282-285]. Some states (e.g., Maharashtra) already offer streamlined land allocation, permitting and incentives, and the government’s “stack-ranking” of states aims to level the playing field [242-245][242-244]. A forthcoming national data-sovereignty act is also expected to shape investment [250-252].
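The 15-minute dispatch idea described above can be sketched as a toy calculation. The flat 50 MW target, the crude solar-day profile, and the `dispatch` helper are all invented for illustration and ignore storage capacity limits:

```python
# Toy sketch of 15-minute scheduling: a day has 96 intervals, and storage
# discharges whenever forecast renewable output falls below a target,
# making the combined supply effectively dispatchable. All numbers are
# hypothetical.

INTERVALS_PER_DAY = 24 * 4  # 96 fifteen-minute blocks

def dispatch(forecast_mw, target_mw):
    """Storage draw needed in each interval to hold the target output."""
    return [max(target_mw - f, 0.0) for f in forecast_mw]

# Crude solar-day shape: dark mornings/evenings, 80 MW around midday.
forecast = [0.0] * 24 + [80.0] * 48 + [0.0] * 24

draw = dispatch(forecast, 50.0)
print(len(draw), max(draw))  # 96 intervals; up to 50 MW from storage
```

In practice the forecast side would combine the climatic, satellite and grid data Mittal mentions, and the dispatch step would respect state-of-charge and round-trip-efficiency limits of the pumped-storage and battery fleet.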


In the policy and regulatory discussion, Raghav Chandra identified systemic bottlenecks: fragmented centre-state coordination, inconsistent permitting, and a lack of synergy among government departments [214-224]. He praised recent budget provisions offering tax exemptions for data-centres with foreign components [226-228] but warned that without synchronized policies and streamlined approvals projects can stall after multiple presentations [237-241]. Vineet Mittal offered a more optimistic view, citing states that already provide streamlined processes and the “stack-ranking” mechanism [242-245][267-274]. Both panelists agreed that new financing and de-risking mechanisms are needed, especially for regions lacking venture-capital ecosystems [41-44][45-48].


The audience asked about the global ramifications of AI-driven data-centre growth. Raghav Chandra answered that beyond emissions, the expansion will intensify water-use pressures, exacerbate social inequities and create cross-border environmental externalities, underscoring the need for coordinated international policy and standards [198-212][121-128][133-136].


Key take-aways


1. AI’s rapid expansion will push data-centre electricity use from ~1.5 % to ~3 % of global consumption by 2030, with attendant environmental and social costs.


2. Energy reliability is the greatest constraint on AI, as illustrated by recent high-profile outages.


3. AI-driven forecasting can schedule solar and wind output in 15-minute intervals, making renewables effectively dispatchable.


4. India’s strategic advantages – abundant renewables, a single national grid, low per-capita consumption [165-166], and a large AI talent pool – position it to become a global hub for gigawatt-scale green data-centres.


5. Advanced cooling, especially two-phase systems, can cut PUE from ~1.5 to ~1.05, dramatically reducing overall power demand.


6. Coordinated policy, tax incentives, streamlined permitting, and a national data-sovereignty act are essential to attract investment.


7. ISA’s AI-for-Energy mission and the planned ISI Academy aim to build the hybrid skill set needed for this transition.


8. New financing and de-risking models, as well as interoperable standards for AI-energy integration, remain critical open challenges.


The moderator concluded with a series of follow-up questions to guide future work [198-212][214-224][226-233][242-245][267-274][280-283][297-304][312-313]:
– policy landscapes for data centres in developing countries;
– the composition of the cooling-innovation ecosystem;
– global standards;
– financing models;
– data-sovereignty legislation;
– centre-state coordination in India;
– water-scarcity-aware cooling;
– commercialization pathways for two-phase cooling;
– social and environmental impact assessments;
– AI-optimised renewable dispatch for AI workloads;
– peer-to-peer power-trading platforms;
– energy-efficient AI models and hardware;
– backup-power reliability;
– projected global electricity and carbon impacts by 2030;
– integration of pumped storage with AI;
– transfer of gaming-industry cooling advances to data-centres.


Session transcriptComplete transcript of the session
Announcer

Good evening, distinguished guests. Welcome to the session on powering AI. As AI scales at speed, so do its infrastructure demands. Data centers are facing unprecedented power and cooling requirements. A single large AI training run can consume as much electricity as thousands of homes use in a year. This raises critical questions like how do we plan for rapidly rising and uncertain energy demand? Can edge computing reduce the load, or is centralization inevitable? To address these critical issues, we are joined by our exceptional panelists. Mr. Vineet Mittal, Chairman of Avada Group. Sir, I request you to please come on stage. Mr. Nathan Blom, Vice President, Cooling Chambers. Professor Raghav Chandra, Professor at IIM Calcutta, Founder and CEO of Consult and former Chairman of NHAI and Secretary to Government of India.

Moderating this important conversation is Mr. Ashish Khanna, Director General of the International Solar Alliance. Thank you, panelists, for being here. With that, I now request Mr. Khanna to please take the discussion forward.

Ashish Khanna

Good evening everyone. Not easy being the last panel, especially when we are probably starting at the time we were supposed to end, but we hope, and we will try, to make it more interesting for all of you. We are here to talk about Powering AI. The format will be that I will begin by framing some of the issues at hand and also tell you a little bit about what the International Solar Alliance is going to do. Then I will hand over to each of the esteemed panelists to make an opening statement on their vision for this question of Powering AI, for about five minutes each. Then I will ask them one question each on some of the specific issues for which they are probably an expert.

And finally, if there is any time left, we will see if any audience member wants to ask a question. Let me start off by saying, why is the International Solar Alliance in this session and in this AI Impact Summit? We are here primarily for two reasons. The first reason is that the world has added 1,000 gigawatts of solar in just the last two years, doubling what was done in the previous 25 years. Almost 40% of that is decentralized, which means it's either solar rooftop, pumps or others. That figure is only 15 or 20% in India and obviously very low in a lot of developing countries. And a distribution company often does not like decentralized solar because it impacts the distribution system and finances.

But the right amount of digitization and AI can actually help them absorb it and reduce the cost of the system as a whole. And therefore, India’s ability to more than double decentralize renewable energy, but in general, world over, will require AI. That’s issue number one, for which actually I will say that we launched a global AI mission for energy in the AI Impact Summit. We call it AI for Energy. The session is going to talk about energy for meeting AI demands. But let’s first talk about AI for Energy. Why? Because there are some elements that the world has not seen, which is, if some of you were part of some sessions earlier, can consumers trade power based on what rooftop and batteries do you have, P2P trading, that requires certain digital enablement of the trade of millions of consumers, producers and consumers, that right now needs a lot of regulatory evolvement.

It also needs an IT architecture, so that each distribution company, in India or for that matter anywhere in the world, knows what will make it ready to actually trade that power. Second, it is about jobs. Today, a lot of AI engineers do not understand energy, and energy engineers do not understand AI. The International Solar Alliance, now a 125-country member body headquartered in India, is creating an ISA Academy to train engineers who bring AI and energy skills together. This intersection of energy and AI will be the fundamental shift over the next five years, the way Amazon changed retail. This, we believe, is what is going to happen in renewable energy. Third is the innovation ecosystem. We are at the AI Summit.

A lot of startups have fundamentally disruptive ideas on decentralized renewable energy, as well as on the way you manage generation, transmission, and more. The fourth is financing. How will all this financing and de-risking be done? Not all places have much venture capital, or access to commercial loans and equity. We are in the process of creating a new industry. And finally, there is a global dimension where the International Solar Alliance is involved.

What are the interoperable standards going to be? Because the world is not united on how all of this will be done. So that is a lot about AI for energy. But there is also an equally important element of energy for AI. The world's largest sources of growth in electricity consumption right now are only two: data centers and cooling. Some growth will also come through electrification of cars, EVs. Now, 70% of all data center demand today is in the US and China, but demand is increasing by more than 50%, and in times to come a lot of it is going to happen in developing countries. We will hear some of that, in addition to the global elements. A lot of innovation is also going into having renewable energy provide that power.

Can 24/7 solar and storage provide cost-competitive energy to some of these data centers, whether small or hyperscale, hyperscale being above 100 megawatts? What is happening in cooling innovation? We will hear from some of the private-sector experts who are trying to come out with a lot of innovation there, and what happens in the ecosystem. Today's data centers consume as much electricity as Spain, and that is going to double every three years. So this is a very important segment. Without further ado, I am going to go to the esteemed panel. I am going to request Mr. Raghav Chandra. Sir, you have been part of the government and are now teaching. When you look at this big…

element of powering AI, how do you see it?

Raghav Chandra

Thank you, Ashish. You have done a fantastic job of covering, in a short time, the larger macro issues connected with this sector. Friends, as we gather here in a nation racing towards digital sovereignty and sustainable growth, I want to emphasize, putting on my academic, professorial hat, the single greatest constraint on AI's future: not algorithms, not chips, but energy for AI-based data centers. I am going to mention a few instances. In late 2024, Mark Zuckerberg made a confession that stunned his employees: a nesting colony of bees had torpedoed Meta's plans to open the world's first nuclear-powered AI data center. That single environmental snag exposed a deeper vulnerability: Meta's AI strategy depended on a single resource it did not control and command, which is electricity.

Power outages and energy shortages have increasingly disrupted major tech companies' operations, particularly as AI-driven data center demand strains global grids. There was another very famous incident on March 29, 2025. A sudden loss of utility power at Google Cloud's Columbus, Ohio facility triggered a critical failure in the uninterruptible power supply (UPS) batteries. This caused a cascading outage of six hours. Over 20 services were hit; various customers experienced degraded performance or total unavailability, affecting cloud-dependent apps and websites globally. No direct apology was issued, of course, but the event underscored energy reliability in a big way in an era of AI growth. In September 2019, utility power failed at one data center in Amazon Web Services' (AWS) North Virginia zone.

Backup generators activated but ran out of fuel after about an hour due to faulty automated refueling systems, exacerbating the blackout. It affected about 7.5% of the volume of apps and databases, and some customers lost data permanently, because backups were not in place. Services like Slack and Netflix saw major ripples. And this has happened not only with Google Cloud or Amazon AWS: it has happened with Microsoft's Azure, which suffered a major setback in 2018, and it has affected TikTok, ByteDance's new USDS joint venture, causing widespread system failures. What it underscores is the need to ensure suitable energy availability for data centers, and suitable backup for data centers.

Otherwise, you will not be able to run these high-powered, energy-guzzling, AI-based data centers, which are the basic unit for AI to be implemented across the board, for simplifying our work and for achieving our goal of AI that is responsible, ethical, efficient, and effective. There is one county in the US, which my friends here on the dais would be aware of, Loudoun County, Virginia, just outside Washington, D.C., where data centers now outnumber people in density. This 40-square-kilometer area of computer server farms has been christened the data center capital of the world. It hosts about 200 operational facilities, and another 100 or so are coming up.

Their peak draw is nearly 3 gigawatts. That is enough to power a small country. Over 70 percent of global Internet traffic passes through this area. What brought Loudoun and its implications to the world's notice was the massive outage at Amazon, causing the tripping of crucial banking services and various social media companies. In Ireland, data centers already consume one-fifth of the nation's electricity, more than all the urban homes combined. Data centers began largely as in-house centers for proprietary computing and data storage. They have since evolved, and today they are largely remote facilities, or networks of facilities, owned by cloud service providers, housing virtualized infrastructure for the shared use of multiple companies and customers. They need tons of electricity.

With all the power-hungry hardware and cooling systems, a data center today uses higher-density racks. Whereas earlier a data center typically used something like 150 to 300 watts of electricity per square foot, today these higher-density racks can consume as much as 100 kilowatts per cabinet, which equates to 10,000 watts per square foot. Therefore, a data center power problem can have global ramifications for the company. AI is supercharging a data center boom that will reshape global energy systems. Global data center electricity consumption today is 415 terawatt-hours, about 1.5% of the world's total electricity consumption. By 2030 it is predicted to be nearly 945 terawatt-hours, or 3% of total consumption. So AI is not a side story; it is the main driver, with accelerated servers growing 30% annually

in the United States, which is the current epicenter. Data centers there used 176 terawatt-hours in 2023, or 4.4% of national electricity. Projections are staggering; it is like adding the entire power demand of a country like Australia or Spain. So when we look at powering AI, we have to look not just at the upstream issues of creating the requisite power supply, but also at the downstream effects and the hidden costs of progress. Environmental: if we rely only on fossil fuels to bridge the gap, emissions soar, so you have the debate between thermal and renewable, which my colleague here will talk about. Big tech's scope 2 emissions are already up 30 to 50% since 2020.
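The growth figures quoted above can be sanity-checked with a quick calculation. A minimal sketch, assuming the ~415 TWh figure is for 2024 and the ~945 TWh projection is for 2030 (the base year is my assumption, not stated on the panel):

```python
# Implied compound annual growth rate (CAGR) of global data-center
# electricity use, from ~415 TWh "today" to ~945 TWh by 2030.
# The 2024 base year is an assumption made for illustration.
def cagr(start: float, end: float, years: int) -> float:
    """Compound annual growth rate between two values over `years` years."""
    return (end / start) ** (1.0 / years) - 1.0

growth = cagr(415.0, 945.0, 2030 - 2024)
print(f"Implied annual growth: {growth:.1%}")  # roughly 15% per year
```

For comparison, "doubling every three years," mentioned earlier for data-center demand, corresponds to `cagr(1, 2, 3)`, about 26% per year, so the global electricity figures imply somewhat slower growth than compute demand itself.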

Globally, data centers could claim 40% of new fossil generation if clean supply lags. So while on the one hand AI can help accelerate decarbonization through optimal strategies and intelligent operation, at the same time the very fact that these facilities are power guzzlers gives them an inherent environmental issue, and therefore there is a need to choose a virtuous path. They also have economic and social costs, as power prices spike. In the US, in areas around data centers, the power cost has gone up significantly; in fact, wholesale electricity has jumped 200 to 250% in five years in certain areas, and households are feeling that pinch. There is an issue of reliability.

Grids weren't built for this. Voltage swings in Virginia have already tripped dozens of centers. In a warming world with rising AC loads, blackouts aren't theoretical; they are a governance failure waiting to happen. Then you have the equity issue: who bears the burden? Communities near data centers face noise, heat, and land-use conflicts. In developing nations such as India, the digital divide widens if energy access for AI crowds out basic needs. So there is a need for ingenuity when dealing with this issue, and efficiency has to be the best weapon for dealing with the larger social, environmental, and other issues connected with it. And, of course, a lot is happening in India, which we will talk about.

There is, indeed, a moment of great happiness that AI is powering us, but there is also a need to be concerned about whether we will be able to power AI effectively, and whether we will be able to manage the downstream effects of doing so. Thank you.

Ashish Khanna

Thank you so much, Raghavji, for laying out the different elements of sustainability risk for society. Nathan, your opening statement, especially from the cooling perspective.

Nathan Blom

…that keeps these Northern Virginia data centers, as an example, from adapting to more efficient and effective technologies. But when you are starting with new builds, with white-space technologies, you have the opportunity to build for the future instead of building for the past. That, to me, is the most important element of how we are going to solve powering AI in the future.

Ashish Khanna

Thank you, Nathan. I am sure you educated a lot of us on what is really happening in cooling innovation. Vineet, over to you: as one of the leading renewable energy developers, how do you see it?

Vineet Mittal

Good evening, everyone. I see AI as one of the biggest opportunities for the renewable sector. Historically, people have believed that renewables are intermittent, which they are; it is difficult to predict when the sun shines and the wind blows. So we needed technology that can help us make intermittent power dispatchable at 15-minute intervals, so that the grid can operate in a stable environment. What AI has enabled is this: with the help of a lot of climatic data, which your weather department collects, which renewable companies collect, which the defense department collects, plus real-time data from low-earth-orbit satellites, if you use all of it in the right way, you are able to predict with AI what your generation will look like.

And then you go a step further: you can schedule and dispatch that power the way a conventional thermal plant would. So that makes it AI for energy and energy for AI, and it empowers the grid to have always-on clean power, which is the uniqueness India offers. Let me tell you, friends: when India started adding solar and wind some 15 or 16 years ago, we did not even have 5 megawatts of operational assets. This year alone, India is going to add 50,000 megawatts of solar and wind capacity, making us the second largest green energy player after China. And what gives India power is that, as the previous panelists were saying, in the US, in Malaysia, even in Ireland, which used to be the data center capital, every country has started charging a surcharge on powering data centers.
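The schedule-and-dispatch idea described above, forecasting generation and committing it in 15-minute blocks, can be sketched in miniature. This is a toy illustration, not any operator's actual dispatch logic; the battery size and power figures are hypothetical:

```python
# Toy model of firming a 15-minute dispatch commitment with a battery.
# If actual generation falls short of the committed schedule, the battery
# discharges to cover the gap; if generation exceeds it, the surplus charges
# the battery. Efficiency losses and power limits are ignored for brevity.
def dispatch_interval(committed_mw: float, actual_mw: float,
                      battery_mwh: float, interval_h: float = 0.25):
    """Return (delivered_mw, battery_mwh_after) for one interval."""
    gap_mw = committed_mw - actual_mw
    if gap_mw <= 0:                               # surplus charges the battery
        return committed_mw, battery_mwh - gap_mw * interval_h
    support_mw = min(gap_mw, battery_mwh / interval_h)
    return actual_mw + support_mw, battery_mwh - support_mw * interval_h

# A cloudy interval: 100 MW committed, solar delivers only 80 MW,
# and a 10 MWh battery supplies the missing 20 MW (5 MWh of energy).
delivered, soc = dispatch_interval(100.0, 80.0, 10.0)
print(delivered, soc)  # 100.0 5.0
```

Real schedulers add forecast models, round-trip efficiency, state-of-charge limits, and deviation penalties on top of this basic balance.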

But the reality is that there are not going to be just 50 or 100 megawatt data centers. We are now talking about 500 megawatt, even gigawatt, data centers, because the compute generates so much heat, as Nathan has explained. If you have to do this affordably and without impacting society, India is the place. The reason I say that is that we are blessed with an abundance of sun, wind, and water. Using pumped storage, because of our geography, we have a natural ability to store energy. And in most states, sun and wind are complementary in nature. So using sun and wind alone, you can generate 14 to 18 hours of power, and then you complement it with pumped storage and batteries.

And if you combine it with AI and build your AI stack properly, you are looking at round-the-clock green power. So India is the perfect location. India is adding 50 gigawatts, and it is not competing with the normal consumer. India has very good policies: using green power, they have been able to move even farming activity from the night shift to the day shift. And our per capita power consumption is one of the lowest in the world, less than 1,500 kilowatt-hours per year per capita. If India has to become Viksit Bharat, you cannot become viksit (developed) without data, and data is the new oil. And what is happening today is that we have 1.4 billion people, of whom a billion are connected.

And we have one of the cheapest data connectivity packages in the world. We are the largest user base of YouTube in the world: almost 700 million YouTube users are from India. We are the largest content-creating economy, whether you take Insta, YouTube, or any social media; even on WhatsApp we are more than half a billion users. And all of this data, as the previous speaker was saying, resides in other countries. Why should we generate so much data and have it reside in another country? Probably because earlier we did not focus on using this abundance of energy to power those data centers. Today, the scenario is that it makes economic sense: in the US,

you cannot get any power now before 2030; even the gas turbines are sold out. The grid interconnection waiting time in the US is typically 7 to 8 years; you can get permitting during that time, but if the world has to adopt AI at massive scale, India offers the opportunity to set up multi-gigawatt data centers, and we can provide them green power using solar, wind, and storage. And we actually have a very unique situation: unlike the US or Europe, India has a single grid. You can inject power in Rajasthan and draw it in Mumbai on a real-time basis. India has invested heavily in the grid, and we continue to grow that national grid, where the whole country is connected.

So the best locations for solar, wind, pumped storage, and batteries can bring power to the data centers in Mumbai and Chennai, which are already connected, so latency does not become the bottleneck, and it becomes the ideal choice. What is needed is probably more of a data sovereignty act: Indian user content has to be located in India by a certain time frame, so that developers can plan for the grid, plan for large data center capacity, and bring it to life. It is one of the greatest opportunities for India, and the Indian ecosystem is fully geared up for it. On top of that, more than 25% of AI talent resides in India, and that talent is currently working for other countries. Going forward, they will be based in India, work for India, and provide services and intelligence to the rest of the world.

Ashish Khanna

Great. Let us have a bit of a discussion, and I do hope we get time for one or two questions. There were some concerns that you talked about, Raghavji, but a lot of optimism on both sides. I will ask a question combining two important elements. The first relates to the policy and regulatory landscape: is India's, or for that matter the developing world's, policy and regulatory landscape conducive to promoting data centers? Vineet, you talked about the importance of data, and the policy and regulatory landscape related to data sovereignty. Even Africa, I remember, was thinking of having legislation like Europe's, where the data for that particular continent or country should reside within that region.

But there is also a policy and regulatory landscape for discovering the price of power for data centers. India believes it is very competitive; the US is struggling with the cost of providing power. Power, rather than Nvidia chips, is probably the limiting factor there. The second element is innovation. Nathan, you spoke about it, but we would like to hear what an innovation landscape for cooling would look like. Is it a lot of startups? Is it larger companies doing process efficiency? I want to request each of you to say what changes in the policy and regulatory landscape and in the innovation landscape would accelerate both the speed and the cost-effectiveness of meeting data center demand.

Raghav.

Raghav Chandra

In the Indian context, the stakes for us, as Vineet mentioned, with all the opportunity and resources available to us in terms of land, water, and skilled manpower, are enormous. Data center capacity is set to explode. Today it consumes about one gigawatt of power; we expect it to reach about eight or nine gigawatts by 2030, and it is continuously growing. We have ambitious states like Andhra Pradesh, which can effectively be called the data cities or data states of the country. We have a coal-dominated grid, which India has, pragmatically, allowed to continue. We have rising cooling needs from extreme heat.

As Nathan mentioned, some of our states can have a power usage effectiveness (PUE) that is extraordinarily high because of all the heat, whereas ideally it should be 1, the perfect index. We also have a net zero ambition, of complete non-fossil-fuel-based energy dependence by the year 2070, our global commitment, which I think is a very bold and generous commitment by India. But the biggest issue I find in this entire landscape, if you ask me, is the ease of doing business in India. I am not being skeptical; I have been an administrator who served as managing director of the state industrial development corporation, the state investment corporation, and the road corporation, as urban development principal secretary, as chairman of the National Highways Authority, and in various other such positions. Now that I sit back and serve on six company boards, I realize that the biggest bottleneck in India today is the lack of synergy between the states and the center, and between the departments of government.

If India has to move forward to achieve the huge target it has set for itself, of becoming the data center country for the world, exploiting our entire human resources, our land resources, and the solar energy we have, then beyond the regulatory schemes, much more is needed. On the regulatory side, much is being done. For instance, in the latest budget, we are all aware of how the Finance Minister announced a scheme for tax exemption for data centers set up in India with foreign collaboration, for the foreign component of the investment and for their revenues. Lots is happening on the renewable energy front.

Lots is happening with the various data centers being set up. However, much needs to be done in terms of synchronized coordination and ensuring that the best technologies are brought in. One of the points Nathan made was about leapfrogging: India should skip technologies that other nations adopted by mistake and go straight to the best technology. Water, in the days to come, is going to be a very big and critical issue, particularly for India. Therefore, using liquid coolants and similar solutions for cooling is going to be extremely important. And this has to be realized not only by the central government but by the states and by everybody working in the field: they must facilitate the adoption of these things in a positive manner.

I had an example of a foreign company that was talking to me the other day. They had signed an MOU with a particular state government for a huge data center investment, and they said, you know, we are struggling; we have made eight presentations and have not been able to move forward. That is the kind of thing where, with the best intentions, and with our Prime Minister being so proactive, we should really have proactive chief ministers, everyone getting down to business, using the large number of experts available to explain the best technology, and perhaps moving beyond L1 procurement to get the best configurations on the ground, to ensure that we are not only efficient but effective.

Ashish Khanna

Great. So, a lot of potential, but work to be done on ease of doing business and center-state coordination, and, on innovation, a big potential for Indian companies to innovate on cooling, especially liquid cooling, given water constraints. Nathan, over to you.

Nathan Blom

Yeah, I'll comment on that, because innovation is the foundation the IT industry is built upon. It is built on the idea that any one individual, or small group of individuals, can create an idea that changes the entire multi-billion-dollar industry itself, and those who don't innovate end up falling off the map. You don't talk anymore about AOL or Ask Jeeves or companies like that, and maybe we'll say the same thing about Meta or Microsoft or Google or Amazon someday; who knows? That is the nature of the industry. So as we look into the future, I think innovation is going to require these smaller companies who are able to take risks and think bigger, especially around cooling technologies.

And that's what's already happening today: we're seeing people who are thinking outside the box of what we've normally considered advanced cooling technologies. Today, when we talk about advanced cooling that's being deployed, what we're really talking about is moving from the air-cooled ecosystem to a simple liquid cooling ecosystem, which was developed in the 1960s by IBM for the Apollo space mission in the United States. It has been used ever since; if you're a gamer at home, it's been used in those large desktop gaming systems. So this is an old and proven technology. You basically use ethylene or propylene glycol mixed with water, pump it through a pipe, and it touches a cold plate on top of the hot chip, captures the heat in the liquid, and moves it away.

And that is a very simple and easy way of capturing heat, but it has limits. The limit we're facing is that the liquid, as it leaves the chip, is getting so hot that you then have to have some way to cool it back down. And that uses an incredible amount of electricity: chillers on the roof of your data center to chill that water back down. The delta between the heated water-glycol and the chilled water has to keep getting bigger and bigger, which means you have to cool that water lower and lower using more electricity, so you eliminate the efficiency gain.

There are now technologies emerging, and this is what my company is focused on, that are very similar to the way we cool air in an air conditioner, in your car, or in your refrigerator. It's called two-phase technology. Basically, instead of pumping liquid around and having it stay liquid, the liquid boils and vaporizes, and that change of phase, from a liquid to a gas, is 10 to 20 times more effective and efficient at capturing heat. That technology is being spearheaded by small companies, and those small companies will get bought up by large companies and be adopted into the ecosystem. So expect to see that.
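The "10 to 20 times" figure can be made concrete with a back-of-the-envelope comparison of heat absorbed per kilogram of coolant. All property values below are illustrative assumptions, not data for any specific fluid or product; the ratio depends heavily on the fluid chosen and on how much temperature rise the chips can tolerate:

```python
# Sensible vs latent heat capture per kilogram of coolant (illustrative).
# Single-phase loops absorb heat by warming the liquid (sensible heat);
# two-phase loops absorb it by boiling the liquid (latent heat), which also
# keeps the coolant temperature nearly constant at the chip.
cp_glycol_water = 3.6   # kJ/(kg*K), assumed specific heat of a glycol-water mix
delta_t = 3.0           # K, assumed allowable temperature rise at the cold plate
latent_heat = 200.0     # kJ/kg, assumed heat of vaporization of a dielectric fluid

sensible_kj_per_kg = cp_glycol_water * delta_t   # ~10.8 kJ/kg
ratio = latent_heat / sensible_kj_per_kg         # ~18.5x
print(f"Two-phase advantage: {ratio:.1f}x per kg of coolant")
```

With these assumed numbers the advantage lands inside the 10-20x range mentioned above; allowing a larger temperature rise in the single-phase loop would close some of the gap, at the cost of hotter chips.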

Expect to see the same basic use of refrigeration, the refrigerants we have today and have been using for a long time, but applied specifically to the IT load of a data center ecosystem. That allows us to get those PUEs, that power usage effectiveness ratio, not to 1.5 but to 1.05. That's a massive step-function increase in efficiency, which means power generation doesn't have to be strained nearly as much. So I think that's where the innovation is really going to come in the next three to five years.
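Those PUE numbers translate directly into overhead power. A minimal sketch, with a hypothetical 100 MW IT load (the load figure is my assumption for illustration):

```python
# PUE (power usage effectiveness) = total facility power / IT power.
# Everything above 1.0 is overhead: cooling, power conversion, lighting.
def overhead_mw(pue: float, it_load_mw: float) -> float:
    """Facility overhead power implied by a PUE at a given IT load."""
    return (pue - 1.0) * it_load_mw

it_mw = 100.0
legacy = overhead_mw(1.5, it_mw)     # 50 MW of overhead
advanced = overhead_mw(1.05, it_mw)  # ~5 MW of overhead
print(f"Overhead drops from {legacy:.0f} MW to {advanced:.0f} MW "
      f"({1 - advanced / legacy:.0%} less)")
```

For the same IT load, moving from a PUE of 1.5 to 1.05 cuts the non-compute power draw by roughly 90 percent, which is the "step function" being described.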

Ashish Khanna

Great. On a lighter note, I am always baffled but amused that the gaming industry was the start of GPUs, and now of cooling as well; it's fascinating how the gaming industry is responsible for the AI revolution. But there is a lot of space for small companies on the innovation side. Vineet, what do you think?

Vineet Mittal

Gaming and batteries, actually, because large batteries require the same kind of cooling. The way I see innovation happening across the board is when knowledge and cross-industry expertise start cross-fertilizing, and for that to happen you have to start creating a local ecosystem. You can't be sitting on the fence, solving and innovating only in theory; when you are building a large data center at gigawatt scale, you can find solutions and use those skills, because similar challenges come up when you design a clean room. How do you combine the expertise of building clean rooms of millions of square feet with the expertise required for cooling batteries and the expertise required for power usage efficiency in the data center?

How do you combine those skills and build a solution that is good for India, where humidity is high in some of the cities where undersea optical fibers terminate? How do you balance it out? You have to use external environmental data to customize your PUE. We see that efficiency is possible at all levels, down to whether the ceiling height should be 6 meters or 8 meters. India is in that sense fortunate that we are building that expertise locally, even before building those 100 gigawatts of data centers. Morgan Stanley did a study: there is a $4 million opportunity cost for the power.

So they are saying the battle for AI is no longer compute, and no longer intelligence; it is power. Power is the biggest challenge. And there is a lot of innovation happening in the power sector in India. You gave a good example of P2P trading using AI. The policy in India is quite open on open access: when I give power to the grid and take it out, I get the power on a real-time basis, which very few countries are able to do globally, and we account for it on a monthly basis. That gives flexibility to the data center, which always wants clean, reliable, 24x7, 365-day power; and that is what is available in India.

I agree with Raghavji that ease of doing business is not similar across the states, but that is why the Government of India is doing stack ranking of the states. Today you cannot be dependent on just one state. Look at Maharashtra: the kind of support they provide today if you want to build a data center is amazing; permitting, land, everything is fairly streamlined, and on top of that they incentivize. So I think the government has got it, even if not every state is on the same page: if you have to become a developed nation, your data is the biggest enabler, and if you have to win any kind of manufacturing battle, data is the biggest enabler. Look at even our financial data: most of the software companies, whether Oracle or SAP or Microsoft, want the data to be on the cloud. With SAP you can no longer stay on ECC; everything goes to HANA on RISE, which is on the cloud, so you buy the space from either AWS or Microsoft, because they have partnerships only with those two. So where is the data of the 40,000-odd companies on these ERP softwares in India going?

Opportunity-wise, I think India, because of its own needs, will innovate consistently. Ease of doing business is a challenge, and that is where there is an opportunity to work continuously and transparently with the government on your challenges, suggesting solutions that do not benefit one voice alone. And the third point is understanding the nuances of how the application layer works across the industry, and educating the government on why they should have a data localization initiative.

And I see all of this coming together, and India probably becoming the third largest country where AI adoption and data centers are one of the enabling blocks for future growth. Thank you.

Ashish Khanna

So, a lot of optimism. I did promise one question, and I have space for only one. Please go ahead; identify yourself and keep the question brief.

Audience

My name is Umesh Prasad Singh, and I am an associate member of the Indian Institute of Public Administration. Sir, my question is directly to you. In your paper you mentioned global ramifications. Those ramifications can be of both types, positive and negative. Could you clarify that point? That is what I wanted to know.

Raghav Chandra

When I said global ramifications, I was talking essentially of the downstream effects of focusing on data centers, and of the implications for the environment, because they are power guzzlers. As Nathan mentioned, earlier we had data centers full of CPUs; today they are full of GPUs, and you are going into even more complex computing units, so with the storage and the networking, they are becoming far more complex. It is going to have an impact on the environment: because of the heat generated intrinsically by the data center; because of the coal consumed to produce that power; and because of the water used, the same water that could serve millions of people. Today we are not able to provide adequate drinking water 24x7 to all our cities, yet water would effectively be used for cooling data centers.

You will have social issues as well. Already today, around thermal power plants, people find that their land, especially in the scheduled areas, is being consumed for coal mining, so there are issues connected with that. Likewise, all kinds of social and environmental issues are likely to arise, along with other implications. So these things are not essentially just localized; though they are local problems, they will affect the global community. The benefit India has is that we can leapfrog in terms of technology. And hopefully, as one of the speakers in the previous session mentioned, chips are also becoming more and more efficient.

So as computing becomes more efficient and the chips become more efficient, you will require less energy. If we can leapfrog and adopt the best technologies in terms of design and infrastructure, that again will be a great saving. Today, no nation is an island; everyone is connected, and anything which impacts one nation affects everyone. If data centres are located here, then, as I mentioned with the case of the Loudoun County outage, an incident can affect billions of people all across the world. So it has a global ramification. While you have to think of your own benefit, you also have to keep an eye on the impact of whatever you are doing on all nations. That is why, when the Prime Minister talks of Manav, it is the human being who is at the centre of it, and the human being is not just you: it is the larger mankind, the larger human community.

Ashish Khanna

Thank you. Unfortunately we do not have time for any more questions, and it is pretty late, so I am ending without summarizing. But it is pretty apparent there is huge optimism about the power of India and developing countries to meet the demand for AI, through solar and storage, innovation in liquid cooling, and of course an ecosystem with ease of doing business. Please join me in giving a big round of applause to all of them, and thank you for staying very late. Thank you, everyone, for joining.

Related Resources: Knowledge base sources related to the discussion topics (28)
Factual Notes: Claims verified against the Diplo knowledge base (7)
Confirmed (high)

“Data‑centres today consume electricity equivalent to Spain’s grid.”

The knowledge base notes that current data centres consume electricity equivalent to Spain’s entire electricity consumption [S4].

Confirmed (high)

“Around 1 000 GW of solar capacity was added worldwide in the last two years.”

A source states that the world has added roughly 1 000 GW of solar capacity, matching the figure cited in the report [S5].

Correction (medium)

“Meta’s aborted nuclear‑powered data‑centre was shut down after a bee‑colony incident.”

The knowledge base reports that Meta is planning to harness nuclear energy for its data centres, but provides no evidence of an aborted project or a bee-colony incident; the claim appears inaccurate [S95].

Additional Context (medium)

“Electrification of cars/EVs is amplifying data‑centre energy demand.”

Policy discussions highlight the rising energy needs of EVs and the need for integrated charging infrastructure, adding nuance to the claim about EVs driving higher electricity demand [S17].

Confirmed (medium)

“Power‑consumption concerns are pushing data‑centres toward edge deployment.”

A source explains that power consumption and site requirements are the main factors encouraging edge deployment of data centres [S94].

Additional Context (low)

“Cooling accounts for roughly 40 % of data‑centre power use.”

An expert notes that about 40 % of a data-centre’s power budget goes to cooling, providing additional detail to the discussion of cooling challenges [S19].

Confirmed (low)

“The AI Impact Summit 2026 includes global ministerial discussions on inclusive AI development.”

The knowledge base mentions that the AI Impact Summit 2026 hosts global ministerial discussions on AI, confirming the summit’s role [S92].

External Sources (96)
S1
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Giordano Albertazzi — -Announcer: Role/Title: Event announcer/moderator; Area of expertise: Not mentioned
S2
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Takahito Tokita Fujitsu — -Announcer: Role as event announcer/host, expertise/title not mentioned
S3
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Cristiano Amon — -Announcer: Role/Title: Event announcer/moderator; Areas of expertise: Not mentioned
S4
Powering AI _ Global Leaders Session _ AI Impact Summit India Part 2 — -Vineet Mittal: Chairman of Avada Group, renewable energy developer and expert
S5
https://dig.watch/event/india-ai-impact-summit-2026/powering-ai-_-global-leaders-session-_-ai-impact-summit-india-part-2 — Good evening, distinguished guests. Welcome to the session on powering AI. As AI scales at speed, so does its infrastruc…
S6
Powering AI _ Global Leaders Session _ AI Impact Summit India Part 2 — – Nathan Blom- Ashish Khanna – Vineet Mittal- Nathan Blom- Ashish Khanna
S7
https://dig.watch/event/india-ai-impact-summit-2026/powering-ai-_-global-leaders-session-_-ai-impact-summit-india-part-2 — Moderating this important conversation is Mr. Ashish Khanna, Director General of the International Solar Alliance. Mr. A…
S9
Powering AI _ Global Leaders Session _ AI Impact Summit India Part 2 — Good evening, distinguished guests. Welcome to the session on powering AI. As AI scales at speed, so does its infrastruc…
S10
https://dig.watch/event/india-ai-impact-summit-2026/powering-ai-_-global-leaders-session-_-ai-impact-summit-india-part-2 — Good evening, distinguished guests. Welcome to the session on powering AI. As AI scales at speed, so does its infrastruc…
S11
WS #280 the DNS Trust Horizon Safeguarding Digital Identity — – **Audience** – Individual from Senegal named Yuv (role/title not specified)
S12
Building the Workforce_ AI for Viksit Bharat 2047 — -Audience- Role/Title: Professor Charu from Indian Institute of Public Administration (one identified audience member), …
S13
Nri Collaborative Session Navigating Global Cyber Threats Via Local Practices — – **Audience** – Dr. Nazar (specific role/title not clearly mentioned)
S14
From KW to GW Scaling the Infrastructure of the Global AI Economy — The infrastructure demands represent a fundamental shift from traditional data centre design. The speakers noted that wh…
S15
AI energy demand accelerates while clean power lags — Data centres are driving asharp rise in electricity consumption, putting mounting pressure on power infrastructure that …
S16
https://dig.watch/event/india-ai-impact-summit-2026/heterogeneous-compute-for-democratizing-access-to-ai — That’s the edge cloud. And as you go deeper from there onwards, then you have the data centers. It then mitigates the ov…
S17
Climate change and Technology implementation | IGF 2023 WS #570 — One argument suggests that the internet and technology can enable innovative solutions by using artificial intelligence …
S18
Building Climate-Resilient Systems with AI — But here’s what we came up with. The first one, I mean, this is a kind of bottom line, but it’s important. AI does have …
S19
HETEROGENEOUS COMPUTE FOR DEMOCRATIZING ACCESS TO AI — “as we go from one gig to nine to ten gig … we have to realize that india is challenged by three physical things that …
S20
Efforts to improve energy efficiency in high-performance computing for a Sustainable Future — The demand for high-performance computing (HPC) has surged due to technological advancements like machine learning, geno…
S21
Greening digital companies: — 5G is enabling energy reductions per unit of data that are not possible with older generations of mobile technology. As …
S22
Creating Eco-friendly Policy System for Emerging Technology — Data centers, which use emerging technologies, consume a lot of energy. Nevertheless, despite their numerous benefits, …
S23
Open Forum #64 Local AI Policy Pathways for Sustainable Digital Economies — Anita Gurumurthy emphasised that despite improvements in chip efficiency, energy demand from data centres continues crea…
S24
Internet Governance Forum 2024 — AI emerged as a key technology with the potential toaccelerate progress on SDGs by up to 70%. From real-time policymakin…
S25
The Innovation Beneath AI: The US-India Partnership powering the AI Era — -Energy Grid Transformation and Clean Power: Detailed exploration of how AI’s massive energy demands require “programmab…
S26
Revisiting 10 AI and digital forecasts for 2025: Predictions and Reality — To address this, companies are exploring innovative solutions such aspower capping(limiting processor power to 60-80% of…
S27
Keynote-Jeet Adani — As we all know, under peak load, advanced processors generate extraordinary heat. Systems throttle when power falters an…
S28
Indias Roadmap to an AGI-Enabled Future — Data centers of India. I mean, that’s the kind of thought that government needs to think. then we can become so that’s w…
S29
Shaping the Future AI Strategies for Jobs and Economic Development — This comment reframes the AI competition from a purely technological race to an economic sustainability challenge, intro…
S30
Business Engagement Session — Dr. Al-Surf highlights the importance of innovative energy efficiency technologies in addressing sustainability challeng…
S31
Powering the Technology Revolution / Davos 2025 — HPE’s liquid cooling technology reduces energy consumption by 90% compared to air cooling in data centers. Neri discuss…
S32
From principles to practice: Governing advanced AI in action — – **Implementation Challenges Across Jurisdictions**: Participants highlighted the tension between rapid technological a…
S33
Revisiting 10 AI and digital forecasts for 2025: Predictions and Reality — To mitigate this,innovative cooling technologiessuch asimmersion coolingandliquid-to-liquid heat exchangersare gaining t…
S34
Powering AI _ Global Leaders Session _ AI Impact Summit India Part 2 — -Innovation in Cooling Technologies: The discussion explored critical innovations in data center cooling, moving from tr…
S35
Global AI Policy Framework: International Cooperation and Historical Perspectives — And these are principles that were established with VCs 20 years ago. And for us, these are non-negotiable foundations f…
S36
Main Session | Policy Network on Artificial Intelligence — Benifei argues for the importance of developing common standards and definitions for AI at a global level. He suggests t…
S37
WS #288 An AI Policy Research Roadmap for Evidence-Based AI Policy — The discussion highlighted the importance of policy interoperability rather than uniform global governance, recognizing …
S38
AI could optimise power grids and reduce energy waste — AI could helpmake power grids cleaner and more efficientwhile reducing energy waste, even as data centres powering gener…
S39
Agentic AI and the new industrial diplomacy — Energy systemsare perhaps the most politically sensitive arena for agentic AI. As renewables grow, grids become harder t…
S40
AI power demand pushes nuclear energy back into focus — Rising AI-driven electricity demand isstraining power gridsand renewing focus on nuclear energy as a stable, low-carbon …
S41
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Ebba Busch Deputy Prime Minister Sweden — “local job anchors if implemented and used correctly.”[73]. “They can be infrastructure for hospitals, for research, def…
S42
Data centre boom drives surge in legal services in India — India’s data centreexpansion, fuelled by investment inAI-ready infrastructure and cloud capacity, is creating strong dem…
S43
AN INTRODUCTION TO — Given the multi-disciplinary nature of Internet governance and the high diversity of actors and policy fora, it is parti…
S44
Government notices · GoewermentskennisGewinGs — –  Existing processes, procedures and fees are not streamlined: There is ‘ no central co-ordination, no consistency in …
S45
Bangladesh Rapid eTrade Readiness Assessment — The private sector has been intimately involved, with encouragement from the Government, in strategic planning rela…
S46
WS #484 Innovative Regulatory Strategies to Digital Inclusion — The disagreements are substantial enough to potentially impact policy coordination and resource allocation, particularly…
S47
TABLE OF CONTENTS — Ensuring sufficient investment and funding in for the policies and strategies outlined in the Policy will be critical to…
S48
Contents — It should be noted that liberalization in the GATS sense the granting of market access and national treatment – is not s…
S49
POLICY BRIEF — – Innovations that speed up cross-border commerce while ensuring trade compliance and lowering trade risks are i…
S50
WS #257 Emerging Norms for Digital Public Infrastructure — These key comments shaped the discussion by highlighting the complex, multifaceted nature of DPI. They moved the convers…
S51
WS #98 Universal Principles Local Realities Multistakeholder Pathways for DPI — Balancing national sovereignty with international interoperability Discussion of need for universal definition, common …
S52
WS #290 Sovereignty and Interoperable Digital Identity in Dldcs — Policy mapping of different regulations across countries is crucial for establishing trust frameworks Regional agreemen…
S53
From KW to GW Scaling the Infrastructure of the Global AI Economy — The infrastructure demands represent a fundamental shift from traditional data centre design. The speakers noted that wh…
S54
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Giordano Albertazzi — A central theme of Albertazzi’s presentation focused on the dramatic transformation occurring in data centre design due …
S55
Open Forum #64 Local AI Policy Pathways for Sustainable Digital Economies — Anita Gurumurthy emphasised that despite improvements in chip efficiency, energy demand from data centres continues crea…
S56
Creating Eco-friendly Policy System for Emerging Technology — Nevertheless, despite their numerous benefits, emerging technologies present substantial challenges and risks. Foremost …
S57
Powering AI _ Global Leaders Session _ AI Impact Summit India Part 2 — “Friends, as we gather here in a nation racing towards digital sovereignty and sustainable growth, I want to emphasize a…
S58
Greener economies through digitalisation — Furthermore, greater stakeholder participation, particularly of Micro, Small, and Medium Enterprises (MSMEs), should be …
S59
The Innovation Beneath AI: The US-India Partnership powering the AI Era — -Energy Grid Transformation and Clean Power: Detailed exploration of how AI’s massive energy demands require “programmab…
S60
Revisiting 10 AI and digital forecasts for 2025: Predictions and Reality — To address this, companies are exploring innovative solutions such aspower capping(limiting processor power to 60-80% of…
S61
Climate change and Technology implementation | IGF 2023 WS #570 — João Vitor Andrade:Hi, everyone. I’d like to thank you all to be present here today. My name is João Vitor, I’m from Bra…
S62
Indias Roadmap to an AGI-Enabled Future — Data centers of India. I mean, that’s the kind of thought that government needs to think. then we can become so that’s w…
S63
Keynote-Jeet Adani — As we all know, under peak load, advanced processors generate extraordinary heat. Systems throttle when power falters an…
S64
The Battle for Chips — In conclusion, India’s strategic approach to developing a comprehensive semiconductor ecosystem demonstrates a commitmen…
S65
Powering the Technology Revolution / Davos 2025 — – Andrés Gluski- Greg Jackson- Uljan Sharka HPE’s liquid cooling technology reduces energy consumption by 90% compared …
S66
Business Engagement Session — Dr. Al-Surf highlights the importance of innovative energy efficiency technologies in addressing sustainability challeng…
S67
Safe and Responsible AI at Scale Practical Pathways — The panel revealed that making data AI-ready is fundamentally a governance challenge rather than merely technical. The a…
S68
From principles to practice: Governing advanced AI in action — – **Implementation Challenges Across Jurisdictions**: Participants highlighted the tension between rapid technological a…
S69
Green AI and the battle between progress and sustainability — AI is increasingly recognised for its transformative potential and growing environmental footprint across industries. Th…
S70
Comprehensive Report: Cyber Fraud and Human Trafficking – A Global Crisis Requiring Multilateral Response — The tone began as deeply concerning and urgent, with speakers emphasizing the gravity and scale of the problem. However,…
S71
From Technical Safety to Societal Impact Rethinking AI Governanc — The discussion began with a formal, academic tone but became increasingly critical and urgent throughout. Speakers expre…
S72
AI and Digital Developments Forecast for 2026 — The tone begins as analytical and educational but becomes increasingly cautionary and urgent throughout the conversation…
S73
WS #460 Building Digital Policy for Sustainable E Waste Management — The discussion maintained a professional, collaborative, and solution-oriented tone throughout. Speakers were constructi…
S74
Open Forum #13 Bridging the Digital Divide Focus on the Global South — The discussion maintained a consistently collaborative and solution-oriented tone throughout. Speakers acknowledged seri…
S75
Shaping the Future AI Strategies for Jobs and Economic Development — This comment reframes the AI competition from a purely technological race to an economic sustainability challenge, intro…
S76
Building the AI-Ready Future From Infrastructure to Skills — The tone was consistently optimistic and collaborative throughout, with speakers expressing excitement about AI’s potent…
S77
Sovereign AI for India – Building Indigenous Capabilities for National and Global Impact — The discussion maintained an optimistic and collaborative tone throughout, characterized by constructive problem-solving…
S78
Driving Indias AI Future Growth Innovation and Impact — The discussion maintained an optimistic and forward-looking tone throughout, characterized by enthusiasm for India’s AI …
S79
Strengthening Corporate Accountability on Inclusive, Trustworthy, and Rights-based Approach to Ethical Digital Transformation — The discussion maintained a professional, collaborative tone throughout, with speakers demonstrating expertise while ack…
S80
Chief Economists’ Briefing: What to Expect in 2025? / DAVOS 2025 — The tone was generally serious and analytical, with economists offering measured but somewhat pessimistic views on globa…
S81
From India to the Global South_ Advancing Social Impact with AI — The discussion maintained an overwhelmingly optimistic and energetic tone throughout. It began with excitement about you…
S82
Panel Discussion AI in Digital Public Infrastructure (DPI) India AI Impact Summit — The tone was consistently optimistic and forward-looking throughout the conversation. Speakers expressed excitement abou…
S83
Keynote-Rishad Premji — Opening framing by the moderator
S84
Keynote-Olivier Blum — Moderator’s framing of the discussion
S85
Keynotes — At the European Dialogue on Internet Governance (EuroDIG) 2024, the imperative of multistakeholder collaboration in shap…
S86
WS #202 The UN Cybercrime Treaty and Transnational Repression — A panel of experts convened at the Internet Governance Forum to discuss the UN Cybercrime Treaty and its potential impli…
S88
Seismic Shift — More than 270 million people will be added to India’s urban population over the next two decades, and Oxford Economics p…
S89
https://dig.watch/event/india-ai-impact-summit-2026/the-global-power-shift-indias-rise-in-ai-semiconductors — And one of the changes that has happened, obviously India becoming the larger in terms of GDP size, consumer demand, peo…
S90
https://dig.watch/event/india-ai-impact-summit-2026/ai-and-data-driving-indias-energy-transformation-for-climate-solutions — A very important question indeed. When in the public policy, the equity is extremely important. And equity means the ent…
S91
Artificial intelligence (AI) – UN Security Council — During the9821st meetingof the Artificial Intelligence Security Council, a key discussion centered around whether existi…
S92
AI Impact Summit 2026: Global Ministerial Discussions on Inclusive AI Development — He’s building an AI to police, AI. And it’s an international effort, and he welcomes partnerships. We will be announcing…
S93
Cooperation for a Green Digital Future | IGF 2023 — Alexia Gonzalez Fanfalone:Thank you very much. Patrick. Everybody hears me okay? Yes? Yes. Okay. So thank you very much….
S94
Designing Indias Digital Future AI at the Core 6G at the Edge — Power consumption concerns are driving data centers toward edge deployment Roy emphasizes that infrastructure challenge…
S95
Meta eyes nuclear energy to power AI and data centres — Metahas announcedplans to harness nuclear energy to meet rising power demands and environmental goals. The company is so…
S96
Day 0 Event #260 Securing Basic Internet Infrastructure — Erica Moret: Well, thank you very much, first of all, for the kind invitation to join you today. You can hear me okay? Y…
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
Announcer
2 arguments · 88 words per minute · 177 words · 119 seconds
Argument 1
AI scaling is driving unprecedented power and cooling demands for data centers.
EXPLANATION
The announcer highlights that as AI models become larger and more numerous, the infrastructure required to run them consumes massive amounts of electricity, creating new challenges for data centre operators.
EVIDENCE
The speaker notes that a single large AI training run can consume as much electricity as thousands of homes in a year and that data centres are facing unprecedented power and cooling requirements as AI scales at speed [5][4][3].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The unprecedented power and cooling needs are documented in [S5] and [S4], which note that a single large AI training run can consume electricity comparable to thousands of homes and that data centres face record-high power and cooling requirements. Higher rack power densities are also highlighted in [S1] and [S14].
MAJOR DISCUSSION POINT
AI growth creates massive energy demand for data centres
Argument 2
Planning for rapidly rising and uncertain energy demand requires evaluating edge computing versus centralisation.
EXPLANATION
The announcer raises the strategic question of how to meet the accelerating energy needs of AI, asking whether decentralised edge solutions can alleviate the load or whether centralised data centres remain inevitable.
EVIDENCE
The speaker explicitly asks how to plan for rising and uncertain energy demand and whether edge computing can reduce the load or centralisation is inevitable [6][7][8].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The strategic trade-off between edge and centralised data-centre capacity is discussed in [S16], which describes how edge cloud can mitigate overall data-centre demand, and in [S5] which calls for planning around rising energy needs.
MAJOR DISCUSSION POINT
Strategic planning for AI energy demand
Vineet Mittal
5 arguments · 141 words per minute · 1722 words · 732 seconds
Argument 1
AI makes renewable energy dispatchable and grid‑stable by predicting generation with climatic and satellite data.
EXPLANATION
Mittal explains that AI can process large volumes of weather and satellite information to forecast solar and wind output, allowing renewable plants to be scheduled and dispatched like conventional thermal generators.
EVIDENCE
He describes using AI together with climatic data, weather department data, low-earth-orbit satellite data to predict generation and then schedule and dispatch power in 15-minute intervals, making renewables behave like conventional thermal power [151][152][153][154][155][156].
MAJOR DISCUSSION POINT
AI enables renewable predictability and dispatch
AGREED WITH
Ashish Khanna
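The 15-minute dispatch scheduling Mittal describes can be illustrated with a toy sketch. All the numbers below (the solar-day forecast shape, the 40 MW peak, the 35 MW commitment) are invented for illustration and are not from the session; only the 96-block day structure reflects the practice he cites.

```python
# Toy illustration of 15-minute-block dispatch from a generation forecast.
# A day has 24 * 4 = 96 fifteen-minute scheduling intervals.

INTERVALS_PER_DAY = 24 * 4  # 96 blocks

def dispatch_schedule(forecast_mw, committed_mw):
    """Dispatch the lesser of forecast output and the committed block size,
    mimicking how a well-forecast renewable plant is scheduled like thermal."""
    return [min(f, committed_mw) for f in forecast_mw]

# Crude solar-day shape: no output at night, 40 MW through the middle blocks.
forecast = [0.0] * 24 + [40.0] * 48 + [0.0] * 24
schedule = dispatch_schedule(forecast, committed_mw=35.0)
print(sum(schedule) / 4)  # energy (MWh) delivered over the day -> 420.0
```

In practice the forecast would come from the weather, climatic, and satellite models Mittal mentions; the point here is only how a predictable output lets a plant commit to firm 15-minute blocks.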
Argument 2
India’s abundant solar, wind, water and pumped‑storage resources uniquely position it to provide 24/7 green power for data centres.
EXPLANATION
Mittal points out that the complementary nature of India’s renewable resources, combined with pumped‑storage and battery capacity, can deliver continuous power, making the country an ideal location for large‑scale data centre deployment.
EVIDENCE
He notes that sun and wind are complementary, can generate 14-18 hours of power, which is then supplemented by pumped storage and batteries to achieve round-the-clock green power, and cites India’s massive solar-wind addition of 50 GW this year [165][166][167][168][169][170].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Mittal’s points are corroborated by [S4], which details India’s complementary solar-wind generation, pumped-storage and the addition of 50 GW of renewable capacity annually, enabling round-the-clock green power for data centres.
MAJOR DISCUSSION POINT
India’s renewable mix can power data centres continuously
Argument 3
Recent policy measures such as tax exemptions for foreign‑collaboration data centres and open‑access power trading create a favourable environment for data‑centre growth in India.
EXPLANATION
Mittal highlights that the Indian budget introduced tax incentives for data centres with foreign components and that the power market allows real‑time, open‑access trading, both of which lower costs and improve flexibility for operators.
EVIDENCE
He references the budget scheme granting tax exemption for data centres with foreign collaboration [226][227] and describes how open-access, real-time power trading gives flexibility to data centres [280][281][282][283].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
[S4] reports the budget-driven tax exemption scheme for data centres with foreign components and the open-access, real-time power-trading framework that lowers costs and adds flexibility for operators.
MAJOR DISCUSSION POINT
Policy incentives boost data‑centre investment
AGREED WITH
Ashish Khanna, Raghav Chandra
DISAGREED WITH
Raghav Chandra
Argument 4
While ease of doing business varies across Indian states, the government is ranking states and streamlining permits to accelerate data‑centre deployment.
EXPLANATION
Mittal acknowledges uneven business environments but notes that the central government is creating a state‑ranking system and simplifying land‑allocation, permitting and incentives, especially in states like Maharashtra.
EVIDENCE
He mentions the stack-ranking of states, streamlined permitting, and strong support from Maharashtra for data-centre projects [242][243][244][245].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
State-level reforms, including a ranking system and streamlined land-allocation and permitting-especially in Maharashtra-are described in [S4].
MAJOR DISCUSSION POINT
State‑level reforms improve data‑centre rollout
AGREED WITH
Ashish Khanna, Raghav Chandra
DISAGREED WITH
Raghav Chandra
Argument 5
Data sovereignty and localisation are essential; India should ensure Indian user‑generated content resides domestically to drive infrastructure planning.
EXPLANATION
Mittal argues that keeping data within national borders will encourage investment in local power and data‑centre capacity, aligning with broader data‑sovereignty initiatives.
EVIDENCE
He calls for a data-sovereignty act that mandates Indian user content be stored in India, linking it to grid planning and large-scale data-centre capacity [194][195][196].
MAJOR DISCUSSION POINT
Data localisation supports domestic infrastructure
Nathan Blom
3 arguments · 173 words per minute · 697 words · 240 seconds
Argument 1
Transitioning from air‑cooled to liquid‑cooled data‑centre architectures is crucial for improving energy efficiency.
EXPLANATION
Blom explains that liquid cooling, originally developed for the Apollo program, directly removes heat from chips via a cold plate, offering a more efficient alternative to traditional air‑cooling systems.
EVIDENCE
He describes the liquid-cooling technology that pumps ethylene or propylene glycol mixed with water through a pipe to a cold plate, capturing heat directly from the chip, and notes its long history since the 1960s [249][250][251][252][253][254].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The shift to liquid cooling and its efficiency benefits are highlighted in [S4] (innovation in cooling technologies) and supported by broader HPC efficiency discussions in [S20].
MAJOR DISCUSSION POINT
Liquid cooling enhances data‑centre efficiency
AGREED WITH
Vineet Mittal
Argument 2
Emerging two‑phase cooling technology can deliver 10‑20 times higher heat‑removal efficiency, dramatically lowering PUE values.
EXPLANATION
Blom highlights that two‑phase cooling, where the liquid boils and vaporises, provides a far more effective heat‑transfer mechanism, potentially reducing PUE from around 1.5 to 1.05.
EVIDENCE
He explains the principle of two-phase cooling, its superior heat-capture efficiency, and quantifies the expected PUE improvement from 1.5 to 1.05, representing a massive step-function increase in efficiency [259][260][261][262][263][264].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
[S4] explains two-phase cooling’s superior heat-transfer, projecting PUE reductions from ~1.5 to ~1.05, i.e., a 10-20× efficiency gain, and [S20] references similar advances in high-performance computing cooling.
MAJOR DISCUSSION POINT
Two‑phase cooling promises major PUE gains
AGREED WITH
Vineet Mittal
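Blom's PUE figures can be sanity-checked with simple arithmetic. This is a sketch assuming a fixed 1 MW IT load; the 1.5 and 1.05 values are the ones cited in the session, everything else is illustrative.

```python
# Sanity check of the PUE figures cited in the discussion.
# PUE (power usage effectiveness) = total facility power / IT equipment power.

def overhead_mw(pue: float, it_load_mw: float = 1.0) -> float:
    """Non-IT load (cooling, power delivery) implied by a given PUE."""
    return (pue - 1.0) * it_load_mw

air_cooled = overhead_mw(1.5)   # 0.5 MW of overhead per MW of IT
two_phase = overhead_mw(1.05)   # 0.05 MW of overhead per MW of IT

reduction = 1 - two_phase / air_cooled
print(f"Overhead cut: {reduction:.0%}")  # prints "Overhead cut: 90%"
```

This is why a drop from PUE 1.5 to 1.05 reads as a step-function gain: the non-IT overhead, which is mostly cooling, shrinks by roughly 90% even though total facility power falls by only about a third.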
Argument 3
Small, innovative companies will drive breakthrough cooling solutions and will later be integrated into larger firms, shaping the industry’s future.
EXPLANATION
Blom argues that the cooling ecosystem thrives on agile startups that pioneer new technologies; as these solutions mature, larger corporations will acquire them, accelerating widespread adoption.
EVIDENCE
He notes that small companies are spearheading the two-phase technology and that they are expected to be bought up by larger firms, leading to industry-wide deployment [260][261][262].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The role of agile startups in pioneering cooling breakthroughs, later to be acquired by larger corporations, is noted in [S4].
MAJOR DISCUSSION POINT
Start‑ups are the engine of cooling innovation
Ashish Khanna
5 arguments · 153 words per minute · 1530 words · 598 seconds
Argument 1
AI can enable decentralized renewable integration and peer‑to‑peer power trading, reducing system costs and supporting the rapid growth of AI workloads.
EXPLANATION
Khanna outlines that AI‑driven digitisation can help distribution companies manage large numbers of rooftop and battery assets, facilitating P2P power markets and lowering overall energy costs for AI‑intensive applications.
EVIDENCE
He cites that 40 % of recent solar growth is decentralized, that AI can help distribution companies absorb this and reduce costs, and that the International Solar Alliance has launched a global AI-for-Energy mission to address these challenges [20][21][22][25][26][27][28].
MAJOR DISCUSSION POINT
AI as a tool for decentralized renewable integration
AGREED WITH
Vineet Mittal
Argument 2
Data centres and cooling are the largest drivers of the current surge in global electricity consumption, and must be addressed to sustain AI development.
EXPLANATION
Khanna points out that data‑centre and cooling loads now dominate electricity growth, especially in the US and China, and that without innovative solutions the energy demand for AI will become unsustainable.
EVIDENCE
He states that data centres and cooling are the world’s biggest sources of electricity consumption increase, that 70 % of current data-centre demand is in the US and China, and that demand is projected to double every three years, especially in developing countries [54][55][56][58][59][60][66][67].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
[S15] warns that AI-driven data-centre growth is accelerating electricity demand faster than clean-power supply, making data-centre and cooling loads the dominant source of recent electricity consumption increases; [S5] also emphasizes the scale of AI-related power use.
MAJOR DISCUSSION POINT
Energy for AI is dominated by data‑centre and cooling loads
AGREED WITH
Announcer, Raghav Chandra
Argument 3
Global interoperable standards and coordinated policy are essential for scaling AI‑energy solutions worldwide.
EXPLANATION
Khanna stresses that without common standards and regulatory alignment, the deployment of AI‑driven energy systems will be fragmented, hindering global adoption.
EVIDENCE
He mentions the need for interoperable standards, noting that the world is not united on how AI-energy integration will be done, and that the International Solar Alliance is involved in shaping this global dimension [50][51][52][53].
MAJOR DISCUSSION POINT
Need for global standards in AI‑energy integration
AGREED WITH
Raghav Chandra, Audience
Argument 4
Financing and de‑risking AI‑energy projects are critical, especially for developing countries that lack venture capital and commercial loan access.
EXPLANATION
Khanna highlights that without appropriate financial mechanisms, many promising AI‑energy innovations will not scale, underscoring the importance of new industry‑wide financing models.
EVIDENCE
He raises questions about how financing and de-risking will be done, noting the scarcity of venture capital and commercial loans in many regions [41][42][43][44][45][46][47][48][49].
MAJOR DISCUSSION POINT
Financial mechanisms are needed for AI‑energy scaling
AGREED WITH
Vineet Mittal, Raghav Chandra
Argument 5
Regulatory evolution is required to enable P2P power trading and the digital enablement of millions of consumers and producers.
EXPLANATION
Khanna argues that current regulations limit the ability of consumers to trade power generated from rooftop solar and batteries, and that AI‑driven digital platforms need supportive policy frameworks to function at scale.
EVIDENCE
He describes the need for regulatory evolution to allow P2P trading of power from rooftop and battery assets, requiring IT architecture and digital enablement for millions of participants [31][32][33].
MAJOR DISCUSSION POINT
Regulation must adapt for AI‑enabled P2P energy markets
AGREED WITH
Vineet Mittal, Raghav Chandra
DISAGREED WITH
Vineet Mittal
Raghav Chandra
5 arguments · 129 words per minute · 2453 words · 1140 seconds
Argument 1
Energy availability is the single greatest constraint on AI’s future, as data‑centre power shortages cause major service disruptions.
EXPLANATION
Chandra emphasizes that without reliable electricity, AI workloads cannot be sustained, citing multiple high‑profile outages that illustrate the fragility of current power supplies for data centres.
EVIDENCE
He references Meta’s plan for a nuclear-powered data centre, a March 2025 Google Cloud outage in Ohio, a 2019 AWS blackout in Virginia, and similar failures at Microsoft Azure and TikTok, all caused by power loss and inadequate backup systems [75][77][78][80][81][82][83][84][86][87][88][89][90][91][92][93][94][95][96].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Reliability challenges such as voltage swings in Virginia that tripped dozens of data centres are documented in [S4], underscoring power-availability as a critical constraint.
MAJOR DISCUSSION POINT
Power reliability is critical for AI infrastructure
AGREED WITH
Ashish Khanna, Audience
Argument 2
Data‑centre electricity consumption will grow to levels comparable with entire national grids, creating significant environmental and social costs.
EXPLANATION
Chandra projects that global data‑centre electricity use will rise from 415 TWh (1.5 % of world consumption) to nearly 945 TWh (3 %) by 2030, equating to the power demand of whole countries and raising concerns about emissions and resource use.
EVIDENCE
He provides figures on current consumption (415 TWh, 1.5 % of global electricity) and future projections (945 TWh, 3 %), noting that this is comparable to the power demand of nations like Australia or Spain [112][113][114][115][116][117][118][119][120].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
[S15] projects global data-centre electricity use rising to ~945 TWh (≈3 % of world consumption) by 2030, a level comparable to the demand of whole countries.
MAJOR DISCUSSION POINT
Data‑centre growth threatens national‑scale energy balances
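The consumption figures above can be sanity-checked with simple arithmetic. A minimal sketch (the implied world-consumption totals are back-of-the-envelope inferences from the stated shares, not numbers given by the panel):

```python
# Back-of-the-envelope check of the data-centre electricity figures
# cited above: 415 TWh at ~1.5 % of world consumption today,
# ~945 TWh at ~3 % projected for 2030.
current_dc_twh = 415.0     # data-centre use today, TWh
current_share = 0.015      # stated share of world consumption
projected_dc_twh = 945.0   # projected 2030 use, TWh
projected_share = 0.03     # stated 2030 share

# Dividing use by share recovers the implied world totals.
implied_world_now = current_dc_twh / current_share       # ~27,700 TWh
implied_world_2030 = projected_dc_twh / projected_share  # 31,500 TWh

print(f"Implied world consumption today: {implied_world_now:,.0f} TWh")
print(f"Implied world consumption 2030:  {implied_world_2030:,.0f} TWh")
print(f"Data-centre growth factor:       {projected_dc_twh / current_dc_twh:.2f}x")
```

The two stated shares are internally consistent: data-centre use growing ~2.3x while its share doubles implies modest (~14 %) growth in total world consumption over the same period.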
Argument 3
Continued reliance on fossil‑fuel power for data‑centres will sharply increase emissions; a shift to renewable energy is essential for a virtuous path.
EXPLANATION
He argues that if clean supply does not keep pace, data‑centres could account for up to 40 % of new fossil generation, urging a transition to renewable sources to mitigate climate impact.
EVIDENCE
He notes that big-tech emissions have risen 30-50 % since 2020, that data-centres could claim 40 % of new fossil generation if clean supply lags, and calls for choosing a virtuous path [121][122][123][124].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
[S15] warns that without clean-power supply, data-centres could account for up to 40 % of new fossil generation, urging a transition to renewables; [S4] also highlights India’s renewable potential as an alternative.
MAJOR DISCUSSION POINT
Renewables needed to curb data‑centre emissions
Argument 4
Rising power costs, grid reliability issues, and equity concerns create social challenges for communities near data‑centres.
EXPLANATION
Chandra points out that electricity prices have surged 200‑250 % in some US regions, that voltage swings cause outages, and that nearby communities face noise, heat, and land‑use conflicts, highlighting the need for equitable burden sharing.
EVIDENCE
He cites wholesale electricity price jumps of 200-250 % over five years, voltage swings tripping dozens of centres, and mentions equity issues such as noise, heat, land-use conflicts, and the digital divide in developing nations [125][126][127][128][129][130][131][132][133][134][135][136][137].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
[S4] cites wholesale electricity price spikes of 200-250 % and voltage-swing-induced outages, and mentions equity impacts such as noise, heat, land-use conflicts and the digital divide affecting nearby communities.
MAJOR DISCUSSION POINT
Energy costs and equity affect data‑centre siting
Argument 5
Lack of coordination between central and state governments in India is a major bottleneck for data‑centre expansion and energy planning.
EXPLANATION
Chandra, drawing on his extensive administrative experience, argues that fragmented governance hampers the implementation of large‑scale data‑centre projects, and calls for better synergy and streamlined processes.
EVIDENCE
He describes the absence of synergy between states and the centre, cites an example of a foreign company unable to progress after eight presentations, and stresses the need for coordinated policy and technology adoption [214][215][216][217][218][219][220][221][222][223][224][225][226][227][228][229][230][231][232][233][234][235][236][237][238][239][240][241].
MAJOR DISCUSSION POINT
Governance coordination needed for data‑centre growth
AGREED WITH
Ashish Khanna, Vineet Mittal
DISAGREED WITH
Vineet Mittal
A
Audience
1 argument175 words per minute67 words22 seconds
Argument 1
A clear understanding of the global ramifications—both positive and negative—of powering AI is essential for responsible policy making.
EXPLANATION
The audience member requests clarification on how AI’s energy demands affect the world, indicating concern that the impacts extend beyond national borders and encompass environmental, social, and economic dimensions.
EVIDENCE
Umesh Prasad Singh, an associate member of the Indian Institute of Public Administration, asks the panel to clarify the positive and negative global ramifications of powering AI [292][293][294][295][296].
MAJOR DISCUSSION POINT
Need for insight into global impacts of AI energy use
AGREED WITH
Ashish Khanna, Raghav Chandra
Agreements
Agreement Points
AI scaling creates unprecedented power and cooling demand for data centres.
Speakers: Announcer, Ashish Khanna, Raghav Chandra
AI scaling is driving unprecedented power and cooling demands for data centres. Data centres and cooling are the largest drivers of the current surge in global electricity consumption, and must be addressed to sustain AI development. Energy availability is the single greatest constraint on AI’s future, as data‑centre power shortages cause major service disruptions.
All three speakers stress that the rapid growth of AI is leading to massive electricity and cooling needs for data centres, making energy availability a critical bottleneck for AI progress [3-5][54-57][66-67][112-119][120].
POLICY CONTEXT (KNOWLEDGE BASE)
The surge in AI workloads has driven data-centre power densities from 10-20 kW per rack to 30-50 kW and beyond, raising concerns about electricity consumption comparable to that of small cities and prompting calls for policy attention [S53][S54]. The EU’s Energy Efficiency Directive is also pressuring operators to adopt more efficient cooling solutions to curb rising demand [S33].
AI can enable renewable energy dispatchability and grid stability.
Speakers: Ashish Khanna, Vineet Mittal
AI can enable decentralized renewable integration and peer‑to‑peer power trading, reducing system costs and supporting the rapid growth of AI workloads. AI makes renewable energy dispatchable and grid‑stable by predicting generation with climatic and satellite data.
Both speakers highlight that AI-driven forecasting and digital platforms can turn intermittent solar and wind into dispatchable, grid-stable power, facilitating large-scale AI workloads [20-26][31-33][151-155][156].
POLICY CONTEXT (KNOWLEDGE BASE)
AI-driven forecasting and optimisation are recognised as tools to improve grid dispatchability, reduce renewable curtailment and enhance stability, as highlighted in research on AI-enabled grid management [S38] and discussions on industrial diplomacy around cross-border energy balancing [S39]. Policy briefs also note AI’s role in supporting renewable investments [S41].
Innovation in cooling (liquid and two‑phase) is essential for improving data‑centre energy efficiency.
Speakers: Nathan Blom, Vineet Mittal
Transitioning from air‑cooled to liquid‑cooled data‑centre architectures is crucial for improving energy efficiency. Emerging two‑phase cooling technology can deliver 10‑20 times higher heat‑removal efficiency, dramatically lowering PUE values.
Both speakers argue that moving away from traditional air cooling to liquid-based solutions, especially emerging two-phase systems, can cut PUE dramatically and reduce the overall power burden of AI data centres [249-254][259-264][267-270][271-274].
POLICY CONTEXT (KNOWLEDGE BASE)
Emerging liquid-immersion and two-phase cooling technologies are cited as ways to cut water use by up to 55 % and lift PUE from ~1.5 to 1.05, with regulatory pressure from the EU’s Energy Efficiency Directive encouraging adoption [S33][S34].
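The PUE improvement cited here (~1.5 to ~1.05) translates into large power savings, since PUE is total facility power divided by IT equipment power. A minimal sketch, assuming a hypothetical 100 MW IT load for illustration:

```python
# Illustrative PUE arithmetic for the cooling improvement cited above
# (PUE ~1.5 with conventional air cooling vs ~1.05 with two-phase
# liquid cooling). PUE = total facility power / IT equipment power.
def facility_power_mw(it_load_mw: float, pue: float) -> float:
    """Total facility power implied by a given IT load and PUE."""
    return it_load_mw * pue

it_load = 100.0  # hypothetical 100 MW of IT load (illustrative only)
air = facility_power_mw(it_load, 1.5)        # 150 MW total facility power
two_phase = facility_power_mw(it_load, 1.05) # 105 MW total facility power

# Share of non-IT overhead (cooling, power conversion) eliminated,
# and share of total facility power saved.
overhead_cut = (air - two_phase) / (air - it_load)  # 90% of overhead gone
total_cut = (air - two_phase) / air                 # 30% of total power gone

print(f"Non-IT overhead cut:      {overhead_cut:.0%}")
print(f"Total facility power cut: {total_cut:.0%}")
```

In other words, moving from PUE 1.5 to 1.05 removes about 90 % of the non-IT overhead and about 30 % of the facility’s total draw, which is why the panel treats cooling as the dominant efficiency lever.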
Supportive policy, regulatory and financing frameworks are needed to scale AI‑energy solutions.
Speakers: Ashish Khanna, Vineet Mittal, Raghav Chandra
Financing and de‑risking AI‑energy projects are critical, especially for developing countries that lack venture capital and commercial loan access. Regulatory evolution is required to enable P2P power trading and the digital enablement of millions of consumers and producers. Recent policy measures such as tax exemptions for foreign‑collaboration data centres and open‑access power trading create a favourable environment for data‑centre growth in India. While ease of doing business varies across Indian states, the government is ranking states and streamlining permits to accelerate data‑centre deployment. Lack of coordination between central and state governments in India is a major bottleneck for data‑centre expansion and energy planning.
All three speakers stress that coordinated policy, clear regulations (including P2P trading), targeted financial incentives and streamlined business processes are essential to unlock AI-driven energy projects and data-centre growth, particularly in developing economies [31-33][41-49][50-53][226-227][242-245][214-224][225-236][237-241].
POLICY CONTEXT (KNOWLEDGE BASE)
International AI policy frameworks stress the need for coordinated standards, financing mechanisms and regulatory support to scale energy-efficient AI, as outlined in the Global AI Policy Framework and calls for enabling investment environments [S35][S47].
Powering AI has global ramifications and requires interoperable standards.
Speakers: Ashish Khanna, Raghav Chandra, Audience
Global interoperable standards and coordinated policy are essential for scaling AI‑energy solutions worldwide. Energy availability is the single greatest constraint on AI’s future, as data‑centre power shortages cause major service disruptions. A clear understanding of the global ramifications—both positive and negative—of powering AI is essential for responsible policy making.
The panel agrees that AI-driven data-centre power use has worldwide environmental, economic and social impacts, making common standards and a shared understanding of these ramifications crucial [50-53][297-304][292-296].
POLICY CONTEXT (KNOWLEDGE BASE)
Multiple forums have advocated for common AI standards and interoperable digital infrastructure to manage cross-border impacts, including proposals for global norms by Benifei [S36] and emphasis on policy interoperability rather than uniform governance [S37][S49][S51][S52].
Similar Viewpoints
Both see AI as the key technology to turn intermittent renewable sources into reliable, dispatchable power for AI workloads, leveraging digital platforms and forecasting to lower system costs [20-26][31-33][151-155][156].
Speakers: Ashish Khanna, Vineet Mittal
AI can enable decentralized renewable integration and peer‑to‑peer power trading, reducing system costs and supporting the rapid growth of AI workloads. AI makes renewable energy dispatchable and grid‑stable by predicting generation with climatic and satellite data.
Both emphasize that next‑generation cooling—liquid and two‑phase—will be decisive for reducing data‑centre power consumption and enabling large‑scale AI deployments [249-254][259-264][267-270][271-274].
Speakers: Nathan Blom, Vineet Mittal
Transitioning from air‑cooled to liquid‑cooled data‑centre architectures is crucial for improving energy efficiency. Emerging two‑phase cooling technology can deliver 10‑20 times higher heat‑removal efficiency, dramatically lowering PUE values.
All three agree that coordinated policy, financing and regulatory reforms are essential to scale AI‑energy solutions and data‑centre capacity, especially in emerging markets [31-33][41-49][50-53][226-227][242-245][214-224][225-236].
Speakers: Ashish Khanna, Raghav Chandra, Vineet Mittal
Financing and de‑risking AI‑energy projects are critical, especially for developing countries that lack venture capital and commercial loan access. Regulatory evolution is required to enable P2P power trading and the digital enablement of millions of consumers and producers. Recent policy measures such as tax exemptions for foreign‑collaboration data centres and open‑access power trading create a favourable environment for data‑centre growth in India. While ease of doing business varies across Indian states, the government is ranking states and streamlining permits to accelerate data‑centre deployment.
Unexpected Consensus
Gaming industry as a driver for cooling innovation in AI data centres.
Speakers: Nathan Blom, Vineet Mittal
Transitioning from air‑cooled to liquid‑cooled data‑centre architectures is crucial for improving energy efficiency. Emerging two‑phase cooling technology can deliver 10‑20 times higher heat‑removal efficiency, dramatically lowering PUE values. Gaming is also relevant because large battery packs require the same kind of cooling.
Both speakers, coming from different backgrounds, unexpectedly converge on the observation that the gaming sector’s demand for high-performance GPUs and large battery packs is spurring the development of advanced cooling technologies that will also benefit AI data-centre efficiency [249-254][259-264][267-270].
Overall Assessment

The panel shows strong consensus that AI’s rapid growth is driving massive energy and cooling needs, that AI can be leveraged to make renewable energy dispatchable, that innovative cooling technologies are essential, and that coordinated policy, regulatory and financing mechanisms are required. There is also agreement on the global nature of the challenge and the need for interoperable standards.

High consensus across technical, policy and environmental dimensions, indicating a unified view that addressing power for AI will require integrated solutions spanning AI, renewable integration, cooling innovation and supportive governance. This alignment suggests that future initiatives can build on shared priorities without major ideological friction.

Differences
Different Viewpoints
Effectiveness of Indian policy and regulatory environment for data‑centre expansion
Speakers: Raghav Chandra, Vineet Mittal
Lack of coordination between central and state governments in India is a major bottleneck for data‑centre expansion and energy planning. Recent policy measures such as tax exemptions for foreign‑collaboration data centres and open‑access power trading create a favourable environment for data‑centre growth in India. While ease of doing business varies across Indian states, the government is ranking states and streamlining permits to accelerate data‑centre deployment.
Raghav argues that fragmented governance and poor centre-state synergy hinder data-centre projects, citing a foreign company stalled after eight presentations [214-241]. Vineet counters that the Indian budget now offers tax exemptions for foreign-partner data centres and that a state-ranking system and streamlined permits, especially in Maharashtra, are already improving the business climate [226-227][242-245].
POLICY CONTEXT (KNOWLEDGE BASE)
India’s rapid data-centre expansion has generated demand for legal and regulatory guidance, highlighting both opportunities and challenges in land acquisition, approvals and compliance under current policies [S42].
Readiness of regulatory framework for peer‑to‑peer power trading
Speakers: Ashish Khanna, Vineet Mittal
Regulatory evolution is required to enable P2P power trading and the digital enablement of millions of consumers and producers. Policy measures such as open‑access, real‑time power trading give flexibility to data centres.
Ashish stresses that new regulations are needed for consumers to trade power from rooftop solar and batteries, requiring IT architecture for millions of participants [31-33]. Vineet says India already provides open-access, real-time power trading that allows data centres to obtain flexible clean power, implying the regulatory gap is already addressed [280-283].
Unexpected Differences
Government coordination vs optimism about reforms
Speakers: Raghav Chandra, Vineet Mittal
Lack of coordination between central and state governments in India is a major bottleneck for data‑centre expansion and energy planning. While ease of doing business varies across Indian states, the government is ranking states and streamlining permits to accelerate data‑centre deployment.
Both speakers are senior Indian officials, yet Raghav highlights systemic governance failures while Vineet portrays a rapidly improving policy landscape, an unexpected contrast given their shared national perspective. [214-241][242-245]
POLICY CONTEXT (KNOWLEDGE BASE)
Observations of fragmented procedures, lack of central coordination and inconsistent pricing underscore coordination gaps that may hinder reform optimism, as documented in government notices and analyses of policy coherence challenges [S44][S46][S43].
Global interoperable standards vs national data‑sovereignty
Speakers: Ashish Khanna, Vineet Mittal
Global interoperable standards and coordinated policy are essential for scaling AI‑energy solutions worldwide. Data sovereignty and localisation are essential; India should ensure Indian user‑generated content resides domestically to drive infrastructure planning.
Ashish calls for worldwide standards to avoid fragmentation, whereas Vineet pushes for a national data-localisation act, revealing a tension between global harmonisation and national control that was not anticipated. [50-53][194-196]
POLICY CONTEXT (KNOWLEDGE BASE)
The tension between universal digital infrastructure standards and national sovereignty is a recurring theme in DPI discussions, with calls for regional trust frameworks that balance interoperability with data-sovereignty concerns [S37][S51][S52].
Overall Assessment

The panel largely concurs on the urgency of addressing AI’s energy demand, but diverges on policy effectiveness, regulatory readiness, and the balance between global standards and national data‑sovereignty. Disagreements centre on Indian governance (coordination vs reforms) and on whether new regulations are needed for P2P trading. These gaps could slow coordinated action unless reconciled.

Moderate – while there is shared recognition of the problem, the differing views on policy and regulatory pathways create noticeable friction that may affect implementation speed and coherence.

Partial Agreements
All speakers agree that reducing the energy burden of AI‑driven data centres is essential, but they propose different pathways: Raghav focuses on reliability and renewable transition, Ashish on AI‑driven market mechanisms and standards, Nathan on advanced cooling technologies, and Vineet on AI‑based renewable forecasting and abundant Indian resources. [73-80][112-119][27-30][50-53][249-254][259-264][151-155]
Speakers: Raghav Chandra, Ashish Khanna, Nathan Blom, Vineet Mittal
Energy availability is the single greatest constraint on AI’s future, as data‑centre power shortages cause major service disruptions. AI can enable decentralized renewable integration and peer‑to‑peer power trading, reducing system costs and supporting the rapid growth of AI workloads. Transitioning from air‑cooled to liquid‑cooled data‑centre architectures is crucial for improving energy efficiency. AI makes renewable energy dispatchable and grid‑stable by predicting generation with climatic and satellite data.
Takeaways
Key takeaways
AI’s rapid growth is driving unprecedented electricity and cooling demand in data centres, now ~1.5% of global electricity and projected to reach ~3% by 2030.
Energy reliability is the greatest constraint for AI; recent outages at major cloud providers highlight the risk of power shortages.
Advanced cooling technologies (liquid and two‑phase) can dramatically improve Power Usage Effectiveness (PUE) and reduce overall power consumption.
AI can enable renewable integration by providing high‑resolution forecasting and dispatchability, turning intermittent solar/wind into reliable 24/7 power.
India possesses strategic advantages: abundant solar, wind and pumped storage, a unified national grid, low per‑capita electricity use, cheap broadband, and a large AI talent pool.
Policy and regulatory alignment (central‑state coordination, data‑sovereignty rules, tax incentives, interoperable standards) is critical to attract data‑centre investment.
Financing, de‑risking mechanisms, and skill development (ISA’s AI‑for‑Energy mission and ISI Academy) are needed to build a sustainable AI‑energy ecosystem.
Resolutions and action items
ISA will launch the AI‑for‑Energy mission and establish the ISI Academy to train professionals at the intersection of AI and energy.
Indian government to continue the tax‑exemption scheme for foreign‑collaborative data centres and to streamline permitting (e.g., the Maharashtra model).
Call for improved coordination between central and state authorities to synchronize policies, land allocation, water use, and cooling‑technology adoption.
Encourage development of interoperable standards for renewable‑powered data centres and P2P power‑trading platforms.
Promote R&D and support for liquid and two‑phase cooling startups, leveraging cross‑industry expertise (clean rooms, battery cooling).
Unresolved issues
How to create a unified, nation‑wide regulatory framework that balances data sovereignty, pricing, and environmental safeguards.
Financing and de‑risking models for large‑scale green data‑centre projects, especially in regions lacking venture capital.
Water‑scarcity management for liquid‑cooling solutions and the environmental impact of large‑scale cooling operations.
Specific mechanisms for real‑time grid integration of renewable generation with AI‑driven dispatch at 15‑minute intervals.
Global governance of standards and cross‑border data‑centre impacts; no consensus on interoperable standards yet.
Suggested compromises
Leverage India’s existing coal‑dominant grid pragmatically while aggressively expanding renewable capacity to meet reliability needs.
Adopt a hybrid cooling approach: combine proven liquid cooling with emerging two‑phase technologies to balance efficiency and water use.
Use AI to improve both the energy efficiency of data centres and the efficiency of renewable generation, mitigating the paradox of AI’s own power demand.
Encourage state‑level incentives (e.g., Maharashtra) while pursuing a central policy framework to ensure consistent ease of doing business across states.
Thought Provoking Comments
I will call it AI for Energy and Energy for AI – we need AI to enable decentralized solar, P2P power trading, and an ISI Academy to train engineers at the intersection of AI and energy.
Sets a dual‑frame that reframes the whole debate, highlighting that AI is both a driver of energy demand and a tool to solve energy challenges, and introduces concrete initiatives (global AI mission, P2P trading, training academy).
Establishes the two‑sided lens that structures the rest of the conversation; prompts panelists to address both supply‑side (renewables, grid) and demand‑side (data‑center power) issues, and leads directly to the first round of opening statements.
Speaker: Ashish Khanna
The single greatest constraint on AI’s future is not algorithms or chips, but energy for AI‑based data centres – illustrated by high‑profile outages at Meta, Google Cloud, AWS and Azure.
Uses concrete, high‑profile failure cases to argue that reliability of power is the bottleneck for AI scaling, shifting the focus from pure compute to systemic energy security.
Triggers a shift from abstract discussion of demand to concrete reliability concerns; other speakers reference the need for backup, grid stability, and regulatory support, deepening the analysis of risk.
Speaker: Raghav Chandra
Global data‑centre electricity consumption is already 1.5 % of world use and could rise to 3 % by 2030 – equivalent to the power demand of whole countries – with serious environmental, social and equity costs.
Provides striking quantitative context that frames data‑centres as a macro‑economic and environmental force, not a niche issue, and links energy use to emissions, price spikes, and equity.
Leads the panel to discuss mitigation strategies (renewables, efficiency, cooling) and brings policy and social justice dimensions into the conversation.
Speaker: Raghav Chandra
AI can make intermittent solar and wind dispatchable at 15‑minute intervals by fusing climatic data, satellite observations and real‑time forecasting, turning renewables into a stable grid resource.
Introduces a concrete technical breakthrough—AI‑driven ultra‑short‑term forecasting—that directly addresses the intermittency problem of renewables, turning a perceived limitation into an opportunity.
Shifts the dialogue toward how AI can solve the supply‑side challenge, prompting further discussion of India’s renewable capacity, storage options, and the role of AI in grid management.
Speaker: Vineet Mittal
Two‑phase cooling technology, where the liquid boils and vaporises, can improve PUE from ~1.5 to ~1.05, dramatically cutting the electricity needed for data‑centre cooling.
Highlights an emerging, high‑impact innovation that tackles the biggest energy consumer within data centres—cooling—by orders of magnitude, and frames it as a startup‑driven breakthrough.
Spurs a focused discussion on the innovation ecosystem, leading other panelists to talk about startup involvement, scaling of new cooling tech, and the need for policy to support rapid adoption.
Speaker: Nathan Blom
The biggest bottleneck in India is the lack of synergy between centre and states, and the uneven ease of doing business; without coordinated policy, even well‑funded projects stall.
Moves the conversation from technology to governance, pinpointing a systemic obstacle that could undermine all technical solutions, and calls for concrete institutional reform.
Redirects the panel to address regulatory reforms, prompting Vineet and Ashish to discuss state‑level incentives, data‑sovereignty laws, and the need for a unified national strategy.
Speaker: Raghav Chandra
India’s single, real‑time interconnected grid, abundant solar‑wind‑water resources, and pumped‑storage capability make it uniquely positioned to host gigawatt‑scale, 24/7 green data centres.
Combines geographic, infrastructural, and policy strengths into a compelling argument that India can become a global data‑centre hub, linking energy abundance to data‑sovereignty and economic growth.
Reinforces the optimism theme, influencing Ashish’s later question about policy and prompting other panelists to acknowledge India’s competitive advantage while also noting the regulatory challenges.
Speaker: Vineet Mittal
Overall Assessment

The discussion was shaped by a handful of pivotal remarks that moved the conversation from high‑level optimism to concrete challenges and solutions. Ashish’s framing of AI‑for‑Energy and Energy‑for‑AI set the dual‑lens agenda. Raghav’s vivid illustration of power reliability failures and his macro‑scale electricity statistics forced the panel to confront the gravity of energy constraints. Vineet’s AI‑enabled renewable dispatch concept and his articulation of India’s unique grid turned the debate toward actionable supply‑side innovations. Nathan’s breakthrough cooling technology introduced a tangible demand‑side efficiency lever. Finally, Raghav’s critique of regulatory fragmentation and Vineet’s emphasis on India’s systemic advantages highlighted the governance dimension that can either enable or block the technical advances. Together, these comments redirected the flow from abstract enthusiasm to a nuanced, multi‑layered dialogue about technology, policy, innovation ecosystems, and geopolitical opportunity.

Follow-up Questions
Is the policy and regulatory landscape in India and other developing countries conducive to promoting data centers?
Evaluating existing regulations, incentives, and barriers is essential to enable rapid data‑center expansion in emerging markets.
Speaker: Ashish Khanna
What will the innovation landscape for cooling look like – will it be driven by startups, large firms, or both?
Understanding the mix of innovators informs investment, research focus, and speed of adoption of efficient cooling technologies.
Speaker: Ashish Khanna
What interoperable global standards are needed for AI‑energy integration?
A lack of unified standards hampers cross‑border collaboration and deployment of AI‑driven energy solutions.
Speaker: Ashish Khanna
How can financing and de‑risking models be developed for renewable‑powered AI infrastructure in emerging markets?
Capital constraints limit deployment; innovative financing mechanisms are required to scale clean AI data centers.
Speaker: Ashish Khanna
How should data‑sovereignty legislation be implemented to ensure Indian user data remains locally stored?
Legal frameworks are needed to attract data‑center investment while protecting national data interests.
Speaker: Vineet Mittal
How can coordination between central and state governments be improved to streamline data‑center approvals and infrastructure deployment?
Current inter‑governmental bottlenecks slow down scaling; better alignment would accelerate projects.
Speaker: Raghav Chandra
What water‑scarcity‑aware cooling solutions (e.g., liquid, two‑phase) are viable for data centers in water‑limited regions?
Cooling consumes significant water; sustainable methods are critical where water resources are constrained.
Speaker: Raghav Chandra, Nathan Blom
How can two‑phase cooling technology be scaled and commercialized for data‑center use?
Emerging two‑phase cooling promises high efficiency but requires pathways to mass adoption and integration.
Speaker: Nathan Blom
What are the environmental and social impacts (noise, heat, land‑use) of large data‑center clusters on local communities?
Assessing equity and community effects is necessary to mitigate negative externalities of data‑center growth.
Speaker: Raghav Chandra
How can AI be used to optimize renewable generation dispatch at 15‑minute intervals to support grid stability for AI workloads?
Fine‑grained AI‑driven dispatch can make intermittent renewables reliably serve power‑hungry AI data centers.
Speaker: Vineet Mittal
What is needed to develop AI‑enabled peer‑to‑peer (P2P) power trading platforms for millions of prosumers?
P2P trading could democratize energy markets and support decentralized renewable integration, but requires regulatory and technical frameworks.
Speaker: Ashish Khanna
How can AI models and hardware be designed to be more energy‑efficient, reducing data‑center power consumption?
Lowering compute energy demand directly lessens the overall electricity and carbon footprint of AI services.
Speaker: Raghav Chandra
What strategies improve the reliability of backup power systems (UPS, generators) for AI data centers to prevent cascading outages?
Past grid failures highlight the need for robust backup solutions to ensure continuous AI service availability.
Speaker: Raghav Chandra
What will be the impact of AI‑driven data‑center growth on global electricity consumption and carbon emissions by 2030?
Quantifying macro‑level effects guides policy and investment decisions toward sustainable AI expansion.
Speaker: Raghav Chandra
How can pumped storage combined with AI provide 24/7 renewable power for data centers?
Integrating storage and AI scheduling could overcome intermittency and deliver continuous clean energy to compute facilities.
Speaker: Vineet Mittal
How can innovations from the gaming industry (e.g., GPU cooling) be transferred to improve data‑center cooling technologies?
Cross‑industry technology transfer may accelerate adoption of efficient cooling solutions for large‑scale AI workloads.
Speaker: Vineet Mittal
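Several of the questions above turn on fine-grained renewable dispatch (15-minute scheduling, storage integration) for AI data centers. As a purely illustrative sketch, not anything presented in the session, a greedy dispatcher could decide at each 15-minute interval how much solar, battery, and grid power serves a flat data-center load; every figure and rule below is hypothetical.

```python
# Illustrative only: a greedy 15-minute dispatch of solar + battery + grid
# power for a constant data-center load. Every figure here is hypothetical.

def dispatch(solar_mw, load_mw, battery_mwh, battery_cap_mwh):
    """One 15-minute interval: serve load from solar first, then battery,
    then grid; surplus solar charges the battery. Returns the grid draw
    (MW) and the updated battery state of charge (MWh)."""
    interval_h = 0.25
    solar_used = min(solar_mw, load_mw)
    residual = load_mw - solar_used
    # Discharge the battery to cover residual load, limited by stored energy.
    discharge = min(residual, battery_mwh / interval_h)
    battery_mwh -= discharge * interval_h
    residual -= discharge
    # Surplus solar charges the battery up to its capacity.
    surplus = solar_mw - solar_used
    charge = min(surplus, (battery_cap_mwh - battery_mwh) / interval_h)
    battery_mwh += charge * interval_h
    return residual, battery_mwh  # residual load is met from the grid

# One simulated day in 96 slots: night, ramp-up, midday peak, ramp-down, night.
solar_profile = ([0.0] * 24 + [20.0] * 8 + [60.0] * 16 +
                 [80.0] * 16 + [30.0] * 8 + [0.0] * 24)
battery = 40.0       # MWh stored at midnight
grid_mwh = 0.0
for solar in solar_profile:
    grid_mw, battery = dispatch(solar, 50.0, battery, 100.0)
    grid_mwh += grid_mw * 0.25
print(f"Grid energy needed over the day: {grid_mwh:.1f} MWh")
```

In practice an AI-driven dispatcher would replace the greedy rule with forecasts and optimization (e.g., model-predictive control over the same 15-minute slots), but the interval-by-interval energy accounting is the same.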

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

Panel Discussion AI in Digital Public Infrastructure (DPI) India AI Impact Summit

Session at a glance: summary, keypoints, and speakers overview

Summary

The panel at the India AI Impact Summit examined how artificial intelligence (AI) can be layered on digital public infrastructure (DPI) to accelerate development outcomes, with the moderator noting India’s pragmatic stance of adopting multiple global models rather than focusing solely on AGI or on jobs and privacy concerns [5-11][6-10]. He then invited Dr Hans Wijayasuriya to discuss the government-level priorities when integrating AI with DPI [13-15].


Dr Hans outlined four guiding dimensions (inclusion, integrity, safeguards and sovereignty) that must be front-and-center for governments [20-28]. He stressed that AI should not replace mature DPI foundations such as clean data, robust APIs and institutional capacity, but act as a scaffolding to improve citizen experience [30-34][35-36]. Inclusion can be advanced through voice-first and translation services that reduce digital divides, while safeguards require bias detection, explainability and human-in-the-loop controls [22-25][37-42]. Sovereignty, he added, means building neutral, vendor-agnostic capabilities that preserve state control over data classification and privacy [44-45].


Robert of UNDP highlighted DPI’s population-scale reach and warned that without early safeguards it can amplify problems, so inclusion must be a primary KPI from the design stage [55-58][63-68]. He described a universal DPI safeguards framework being piloted in several countries, embedding multilingual, multimodal access and bias-checking before AI is layered on [69-79]. Sangbu Kim of the World Bank argued that DPI provides essential interoperability for the emerging AI-enabled (AEI) era and that AI can quickly upgrade legacy DPI platforms, creating demand-driven use cases [81-92][182-191].


Saibal Chakraborty noted that India’s open, population-scale DPI such as Aadhaar and UPI has become a benchmark, spawning over 120 unicorns that leverage these platforms [96-102][104-106]. He said the next step is treating AI as a shared public infrastructure: providing affordable compute (e.g., 38,000 GPUs at less than $1 per hour) and controlled access to government data through platforms like AI Coach, to stimulate startups in underserved sectors [122-130]. From a policy perspective, he urged the creation of accountable institutions at both central and state levels that can safely expose data to innovators while protecting sovereignty, citing Telangana’s Section 8 public-sector undertaking as a model [198-213].


The panelists agreed that AI, when built on robust DPI foundations and governed by inclusion, integrity, safeguards and sovereignty, can deliver customized citizen services and unlock opportunities for populations previously left out of the digital revolution [214-218]. They concluded that the coming years will see a new wave of private-sector innovation driven by AI-enabled DPI, provided governments and multilateral institutions act now to embed safeguards and demand-focused use cases [214-218].


Keypoints

Major discussion points


Four government-centric pillars for AI-enabled DPI:


Dr. Hans emphasized that governments must prioritize inclusion (ensuring new capabilities reduce rather than widen divides) [20-24], integrity (building on mature DPI foundations such as clean data, robust APIs, and institutional capacity before layering AI) [28-34], safeguards (bias detection, explainability, human-in-the-loop to prevent harm at scale) [37-44], and sovereignty (maintaining neutral, controllable technology stacks and data protection) [44-46].


Early-stage safeguards and inclusive design are essential:


Robert highlighted that DPI’s population-scale reach brings both opportunity and risk, and that embedding safeguards from the outset is critical to avoid large-scale failures [55-58]. He stressed that inclusion must be a primary KPI, not an afterthought, and that AI-driven DPI must incorporate multilingual, multimodal, and bias-aware components from the beginning [63-69][78-79].


India’s DPI experience as a template for AI-driven private-sector innovation:


Saibal noted that India is viewed globally as a benchmark for open, population-scale DPI (Aadhaar, UPI) that has spawned a wave of unicorns [97-101]. He argued that AI should be treated as shared public infrastructure, providing affordable compute (e.g., 38,000 GPUs at <$1/hr) and early access to government data, to unlock startups in underserved sectors such as climate, education, and MSMEs [122-127][128-132].


Policy and institutional frameworks to balance data access with sovereignty:


Saibal warned that governments must walk a “tightrope” between exposing valuable public data for innovators and protecting sovereign, privacy-sensitive information [198-205]. He recommended creating accountable, state-level institutions (e.g., Section-8 public sector undertakings) that can set agile AI policies while safeguarding data [206-212].


Shift of development banks toward AI demand creation and use-case pathways:


Sangbu described DPI’s role in interoperability and “small AI” to generate demand in regions with good connectivity but low usage [81-90][182-190]. Robert outlined UNDP’s 100 Diffusion Pathways initiative, a use-case-driven effort to scale responsible AI across sectors, complementing internal capacity-building and country-level support [168-179].


Overall purpose / goal


The panel aimed to explore how Artificial Intelligence can be layered onto Digital Public Infrastructure to accelerate development outcomes, while ensuring that inclusion, integrity, safeguards, and sovereignty are embedded from the start. Participants shared lessons (notably India’s DPI model), identified policy and institutional needs, and outlined how multilateral institutions and the private sector can jointly foster an AI-ready ecosystem for underserved populations.


Overall tone and its evolution


– The discussion opened with optimistic celebration of India’s DPI achievements and the promise of AI [5-11].


– It then moved into a cautious, analytical tone, focusing on risks, safeguards, and the need for robust foundations [16-46][55-69].


– As the conversation progressed, speakers adopted a forward-looking, collaborative tone, highlighting concrete initiatives (AI Coach, 100 Pathways) and policy recommendations [81-90][122-132][168-179].


– The session concluded on a hopeful and encouraging note, emphasizing the potential to bring “billions of people” into the digital future through AI-enabled DPI [214-218].


Speakers

Speaker 1


– Role/Title: (not specified)


– Area of Expertise: Event host / moderator


Sangbu Kim


– Role/Title: Vice President for Digital, World Bank [S4]


– Area of Expertise: Digital public infrastructure, AI, World Bank development initiatives


Robert Opp


– Role/Title: Chief Digital Officer, United Nations Development Programme (UNDP) [S7]


– Area of Expertise: Digital safeguards, AI for development, UNDP programs


Saibal Chakraborty


– Role/Title: Managing Director and Senior Partner, Boston Consulting Group [S9]


– Area of Expertise: AI, digital public infrastructure, private-sector innovation, policy advising


C.V. Madhukar


– Role/Title: Chief Executive Officer, CoDevelop (moderator) [S12]


– Area of Expertise: AI, digital public infrastructure, facilitation of panel discussions


Dr. Hans Wijayasuriya


– Role/Title: (government representative – senior official, Sri Lanka)


– Area of Expertise: Government digital strategy, DPI implementation, AI governance and safeguards


Additional speakers:


Arjun – Mentioned as having introduced the panel; role and expertise not specified in the transcript.


Full session report: comprehensive analysis and detailed insights

The session opened with moderator C.V. Madhukar reflecting on the four-day AI dialogue, praising India’s leadership in Digital Public Infrastructure (DPI) and contrasting its pragmatic approach with the United States’ focus on artificial general intelligence, jobs and privacy concerns [1-4][5-11]. He noted that India is “drawing on a Chinese model, an American model … a whole bunch of other innovations” and intends to “embrace and use all of this for our benefit.” He then invited Dr Hans Wijayasuriya, representing the Sri Lankan government, to outline the sovereign priorities when AI is layered onto DPI [13-15].


Four inter-linked pillars presented by Dr Hans were:


* Inclusion – AI must not widen divides; it should reduce them through voice-first, translation, multimodal access and, where needed, a human-in-the-loop [20-25].


* Integrity – AI should rest on a mature DPI foundation of clean data, data-maturity, reliable APIs and institutional capacity before it can act as “scaffolding to accelerate delivery” [28-34].


* Safeguards – Bias detection, consent augmentation, explainability and human oversight are essential because “bias, opacity, at scale would mean harm at scale” [37-44].


* Sovereignty – Nations need neutral, vendor-agnostic technology stacks, clear data-classification and privacy controls to retain “neutral capability … so that you have control” [44-45].


When the moderator asked a forward-looking question about the “long-run” for a small island nation, Dr Hans responded specifically about Sri Lanka. He identified three immediate challenges – building a minimum sovereign AI infrastructure, retaining talent, and establishing trusted data-protection institutions [147-151] – and highlighted a strength: the ability to implement “modular systems in a neat and flexible way.” He explained that, with a solid DPI base, AI can deliver “digital-twin-style, citizen-specific services” at lower cost and higher speed, and projected that Sri Lanka could make “big advances on DPI and AI” within the next two to three years [155-162].


Robert Opp of UNDP reinforced the centrality of early safeguards. He warned that “if efficiency is the only metric, you will rush ahead and leave people out” and stressed that “inclusion must be the primary KPI” [60-62]. He described a “universal DPI safeguards framework” that is now being piloted in several countries with partners such as Co-Develop and the Gates Foundation [55-58]. UNDP is also building internal AI capability through up-skilling programmes, acquiring foundation-model tools and developing service-level modules (SLMs) to embed multilingual, multimodal access and bias-checking into DPI platforms [80-84].


Sangbu Kim of the World Bank positioned DPI as the essential interoperability backbone for the emerging AI-enabled Interoperability (AEI) era. He traced the evolution “computer → mobile → AEI” and argued that AI can “quickly streamline all DPI platforms” while automating “old manual data-governance processes” [90-95]. Acknowledging DPI’s limits, he emphasized its role in preventing siloed approaches and in creating demand through “small-AI” pilots that turn existing 3G-plus coverage in sub-Saharan Africa into tangible AI-driven services [182-190][186-191].


Saibal Chakraborty, senior partner at the Boston Consulting Group, used India as a benchmark. He recalled how open, population-scale DPI such as Aadhaar and UPI “triggered innovation” and underpinned the emergence of “120 unicorns” [96-102]. Building on this, the Indian AI Mission launched the “AI Coach” platform and a state-level analogue (TGDX), treating AI as a “shared public infrastructure” akin to DPI [122-124]. A concrete illustration is the provision of “more than 38,000 GPUs … at less than ₹60 per hour (< $1/hr)” [124-128], dramatically lowering the compute barrier for early-stage startups. To address funding gaps in climate, education and MSME sectors, the government is considering “fund-of-funds” mechanisms that encourage co-investment by venture capitalists [129-130]. Saibal warned that policy must “walk the tightrope” between exposing valuable government data to innovators and protecting sovereignty, recommending the creation of “accountable institutions” – for example, a Section-8 public-sector undertaking in Telangana – to govern data access and AI policy at both central and state levels [190-196].
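The compute-pricing claim lends itself to a quick sanity check. The exchange rate and the sample job below are assumptions for illustration, not figures from the session:

```python
# Back-of-envelope check of the quoted GPU price (illustrative; the
# INR/USD rate and the sample job are assumptions, not session figures).
inr_per_gpu_hour = 60.0
inr_per_usd = 83.0                    # assumed exchange rate
usd_per_gpu_hour = inr_per_gpu_hour / inr_per_usd
assert usd_per_gpu_hour < 1.0         # matches "less than a dollar per hour"

# What a small, hypothetical fine-tuning run would cost at this rate.
gpus, hours = 8, 72
cost_usd = gpus * hours * usd_per_gpu_hour
print(f"{usd_per_gpu_hour:.2f} USD per GPU-hour; "
      f"{gpus} GPUs x {hours} h costs about ${cost_usd:.0f}")
```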


Across the discussion, a strong consensus emerged:


1. Inclusion and early-stage safeguards are non-negotiable.


2. A mature, interoperable DPI is a prerequisite for any AI overlay.


3. Affordable compute and shared AI platforms are essential catalysts for private-sector innovation.


4. Accountable, neutral institutions are required to balance data openness with sovereign control.


These points were repeatedly reinforced by Dr Hans, Robert, Sangbu and Saibal [20-34][37-44][55-66][90-95][122-128].


Key takeaways


* Governments should embed the four pillars when integrating AI with DPI, ensure clean data, robust APIs and institutional capacity before adding AI, and embed bias detection, explainability and human-in-the-loop safeguards [20-34][37-44].


* UNDP will continue piloting its universal safeguards framework, expand up-skilling, and launch a partnership with Xstep to develop “100 Diffusion Pathways,” a use-case-driven roadmap for responsible AI [55-58][80-84][168-179].


* The World Bank will promote demand creation through small-AI pilots and enhance DPI interoperability to unlock AI services in underserved regions [182-191].


* India, via BCG and government initiatives, will expand shared AI infrastructure (AI Coach, TGDX), provide low-cost GPU compute, and establish fund-of-funds mechanisms to channel investment into socially sensitive sectors [122-130][124-128].


* All participants urged the creation of accountable, possibly statutory, institutions to govern data sharing while safeguarding sovereignty [190-196][209-212].


Unresolved issues


* Concrete mechanisms for controlled government data sharing that protect sovereignty while enabling innovation remain undefined.


* Strategies to sustain demand for AI services in regions with high connectivity but low utilisation need further elaboration.


* Funding models that effectively direct venture capital into climate, education and MSME AI applications are still in development.


* Metrics for measuring inclusion impact, bias detection and safeguard effectiveness at scale were mentioned but not detailed.


* Small nations require clearer pathways for scaling sovereign AI infrastructure beyond modular DPI, especially regarding talent pipelines and long-term sustainability.


In closing, the moderator emphasized that AI-enabled DPI is at a pivotal moment, poised to unlock large-scale inclusive innovation and bring “billions of people who were left out of the digital revolution, especially those who rely on voice-first interfaces,” into the fold [214-218]. He thanked the panelists and invited participants to continue the dialogue at the summit’s expo, which will remain open [220-225].


Session transcript: complete transcript of the session
Speaker 1

economy. Saibal Chakraborty, Managing Director and Senior Partner, Boston Consulting Group. The moderator, C.V. Madhukar, Chief Executive Officer of CoDevelop. And Mr. Sangbu Kim, Vice President for Digital, World Bank, who will be joining us in a bit. Thank you. Right, I will let the moderator take it forward.

C.V. Madhukar

Thank you so much. Thank you, Arjun, and thank you to all the panelists. The last session of the last day is a bit challenging, so we will try and keep this focused, and at the end, if you have any questions, we have time; it would be great to have any questions if you want to have any discussions. I think we have heard a lot about AI and DPI in the last four days; I don’t want to belabor the point. I think what we can celebrate for sure in India is that, in terms of DPI, the thinking that we take to the world and to our own problems is amongst the best in the world.

And as we look at the journey on AI, which is just beginning for most of the world, what I see is if I look at the US, for instance, there is one spectrum of conversation, which is AGI and beyond. And on the other end, there is a despondency and worry and concern about jobs and privacy and a whole bunch of other important considerations. I think where India is, is somewhat different. India is saying, look, there will be a Chinese model, there will be an American model, there will be a whole bunch of other innovations that are going on. What do we do to embrace and use all of this for our benefit? I think that optimism and sense of can do and must do, I think is very exciting, and I think it’s been palpable in the last four or five days.

So to discuss the power of AI and DPI and AI in DPI, we have a wonderful panel, as Arjun has introduced already. I will start with Dr. Hans, if you don’t mind. I think you represent a national government and you are living through these choices on a day-to-day basis. As a pragmatic government that has to think about sovereignty, inclusion, safeguards, what are the main considerations that are on top of your mind?

Dr. Hans Wijayasuriya

Thanks for the question. You’re right, I think those three words are very key when you’re talking from a government perspective and therefore national: inclusion, integrity or the safety of the citizens, and also safeguards. I’ll just run through these three very quickly. Why inclusion? When it’s government and you introduce a new capability, you have to be sure that that capability does not increase divides, that it actually reduces divides. So you have to be very sensitive on the inclusion angle. And, for instance, AI, together with DPI, can stretch inclusion through its voice-first capabilities; we talked about cloud-first, API-first, et cetera. Now we can really seriously talk about voice-first and also the translation capabilities.

And accessibility has to be also broadened in terms of multi-modality and also, where necessary, include a human in the loop in the service delivery cycle as well. So some of the inclusion dimensions. Then more to integrity, which is a tougher one. An important point when you’re looking at integrity is to start from the premise that AI will not redefine DPI. AI would, or at least where we look at it from now, maybe I’ll be wrong in six months from today, but the DPI foundations must be in place first. DPI should be mature, and your approach to implementing DPI should be mature, and then you apply AI as a scaffolding on top of that foundation to accelerate your build and delivery.

So what are those foundations? Clean data and data maturity, maturity of your data architectures, clean registers, for instance, also reliable APIs, APIs which are not susceptible to cyber attacks, et cetera, and also the institutional capacity. The institutional capacity behind the DPI delivery. So these are foundations which should be in place. Now, on top of that, you apply AI and there are several unique features of AI which would deliver you super experience to the citizen. And I’ll come to that in a short time. The last one would be the safeguards, bias detection, and also the augmentation to consent. Because when you have AI systems in place, your consent could be AI generated as well. So you need to be careful about the augmentation you require, explainability, and human in the loop as well in terms of your safeguards.

So we need to be conscious that bias, opacity, at scale would mean harm at scale as well. So everything AI scales. Sovereignty is all about capability. It’s not isolation. It’s about building the capability in terms of being neutral and having a neutral capability in terms of vendors, in terms of cloud, in terms of technologies, so that you have control, that the state has control over those building blocks and can choose across technology delivery parts as well as core technologies. Also data classification, protection, privacy basics. So I think if you go back to the foundations of protecting your citizens and their data, you can’t go wrong, and AI will only be an accelerator. Finally, the experience angle, because a government would be very keen to deliver super experience, and where does AI come in here? I believe there are some specific strengths of AI when combined with DPIs, and that is in particular API orchestration: picking the right API to call at the right time. Also the fact that AI enables unconstrained scenarios. While linear methodologies would be constrained to, say, four or five scenarios when a citizen needs some help, AI can do a billion scenarios, and each scenario could be implemented using a unique set of API calls and a unique set of DPI access, and this can be orchestrated using AI. So can AI make a big difference, a leap forward in service delivery?

I believe so. Can governments and the sovereign use it, or should they? Definitely, but we need to be conscious of those four dimensions that I just mentioned: inclusion, integrity, safeguards, sovereignty. All of these are important, and they come together to deliver a perfect experience.

C.V. Madhukar

Thank you, Dr. Hans. I will give a bit of a breather to Sangbu. Thank you, Sangbu. Great to have you. I’ll go to Robert first and then… Robert, as you at UNDP have led a very important work on safeguards, worked on the Global Digital Compact, engaged a number of countries on sustainable development goals and how AI and DPI can be an accelerant to all of those outcomes if you want. From your vantage point as you look at the AI revolution that’s unfolding upon us, what captures, what’s top of mind for UNDP?

Robert Opp

Yeah, no. So I think that the reason we have been so excited about digital public infrastructure as an approach overall is that it really does bring some very particular characteristics. And one of those, maybe the most important, is the population scale. and so it is something that can reach so many people so quickly if you get it right. We also have been learning that if you don’t get it right, then you can have problems and challenges at scale and so in the DPI space, one of the things that concern us the most is how do we ensure that as countries are building their DPI, how do we make sure that we are putting the safeguards in place?

And this is work that has been supported generously by Codevelop and Gates Foundation and others and has led over the last year and a half or so to the creation of a universal DPI safeguards framework, which we’re now implementing, or supporting a number of countries at national level to implement. But when we talk about that, what does it actually mean? And Dr. Hans referred to some of these important things. And so we are talking about the DPI as a whole. And what we’re learning is that the earlier in the process that you can start discussing the safeguards, the better off you’ll be down the road in terms of inclusion.

So if efficiency is your only metric, then you will probably rush ahead and leave people out. But if inclusion is your driving KPI, then you really need to make sure that you’re sitting down at the beginning and planning and designing with people in mind. When it comes to AI, then, it’s basically the same thing. And we need to be really careful, as Dr. Hans was saying, that we are putting the inclusion aspects, the safeguard aspects, at the center of our planning from the very beginning. And so that means, and you referred to a couple of these, but, you know, a multilingual platform, multimodal that can support people with disabilities, making sure that you are correcting for people.

So that’s one of the things that we’re doing. And I think that’s a really good thing. And I think that’s one of the things that we’re doing. And I think that’s one of the things that we’re doing. And I think that’s one of the things that we’re doing. And I think that’s one of the things that we’re doing. And I think that’s one of the things that we’re doing. And I think that’s one of the things that we’re doing. And I think that’s one of the things that we’re doing. And I think that’s one of the things that we’re doing. And I think that’s one of the things that we’re doing. and then of course making sure that data sets are there’s bias detection or you’ve got some understanding of your accuracy representation all of those kinds of things because when you’re going to layer AI into DPI as an accelerator well then you want to be sure that you’re on the right track and that people are considered from the very beginning with that and so I think that’s we see all the

C.V. Madhukar

Thank you, Rob. Can I go to Sangbu now? As you see the evolution of DPI plus AI from the World Bank standpoint, one of the things that we’ve often looked at is having open data sets that can train AI engines would be an important way of advancing the benefits of AI. But as you know, governments build silos, one silo after the other. How are you seeing this from the World Bank’s standpoint? What is the next journey for the next 3-5 years looking like in terms of getting countries to become more AI-ready? Any thoughts on that would be great.

Sangbu Kim

DPI has a lot of aspects and characteristics this is one of the very productive and efficient way to ensure the interoperability even though it cannot ensure everything but we make some more effort to clarify some more interoperability capability so but it is basically what it looked like. So that’s why in AEI era, in order to prevent some siloed approach, DPI can play a very significant role, I would say. If you think about the AEI era, what is the relationship between DPI and AEI? If I just compare the DPI from the previous version of DPI, I would say DPI is more helpful for AEI era compared to the previous mobile era in many reasons. If you look back on our history of computing, we started from the computer and PC and evolved to mobile and evolved to AEI.

The trend is that it is from the very supplier-centric approach through the user-centric approach. We are evolving from the supplier mindset through the user mindset. The DPI is exactly just some of the really, really important tools to ensure that user-centricity. Because without identifying some good tools of users and some interoperability, it cannot really be achieved to fully support the user-customized things. So to me, the DPI and AI will be really important relationships. On the other hand, one opportunity I also see through this trend: maybe DPI can be also very well upgraded, quickly and efficiently upgraded, through AI-enabled technology. So we used to collect all the data in a very manual way and then try very hard to streamline the governance of data, sometimes very manually and sometimes by some intervention by the programmer, but now we can see some old medical way so that we can quickly streamline all the DPI platforms. So this is really a good opportunity for all of us. From the World Bank perspective, we really expect to see some more progress in this space.

C.V. Madhukar

Wonderful Saibal as a senior partner at Boston Consulting Group you have a bird’s eye view of a lot of early thinking on AI led innovation around the world, especially in the private sector. And I’m also sure you’ve been observing closely the India trajectory of DPI in the last decade or so. What lessons from the DPI journey in India can we take to the AI era that might also propel private sector innovation to levels beyond if we just didn’t think about DPI as core infrastructure for AI?

Saibal Chakraborty

So I think, firstly, India’s journey in DPIs has been a fascinating one. It makes me immensely proud that whichever country I go to, and I do work with quite a few South Asian and Southeast Asian countries, India is almost always seen as a benchmark in DPI, and now increasingly AI on top of DPI. I think, so maybe I’ll just answer your question in two parts, right? I mean, when we were building DPIs, starting with Aadhaar, you know, and then, you know, moving on to UPI, et cetera, the idea was always to build open population scale software, which can then trigger innovation, right? And that’s exactly what has happened over the last decade, right? The amount of innovation that has been built on these DPIs is mind-blowing.

I mean, India is now a country of 120 unicorns, and every unicorn, some way or the other, leverages the DPIs. So then, coming to the second part of your question, what lessons, right? If you look at the way India is now thinking about AI, as BCG we have been privileged to be part of two of the leading efforts in the world. We have done several seminal efforts, one with the India AI Mission to build AI Kosh, India’s national AI platform, and also with the state of Telangana to build the equivalent for the state of Telangana. Both of these happened last year. I think we are taking a very, very similar ethos, right?

So if you think about India, as I mentioned, there are 120 unicorns. There is no dearth of VC funding in India at all. However, 90% of that VC funding actually goes into fintech and e-commerce. Very little goes into climate and sustainability. Very little goes into education. Very little goes into MSME-relevant topics. So there is a gap, right? So what are these platforms trying to do? Similarly, on access to data: within the private sector there is good-quality data, but the biggest source of data in India is the government, and access to government data is still at a very nascent stage. Quality of data and access to data. And then, of course, the biggest thing in AI, which is compute, right?

I mean, access to compute is what makes or breaks a startup. So the way I see it in India, the way we have started thinking about AI platforms, and I’ll use the word platform, it treats AI as a shared public infrastructure. Just like DPI was a shared public infrastructure, it now treats AI as a shared public infrastructure. If you look at the India AI Mission, more than 38,000 GPUs are now available at less than 60 rupees per hour, which is less than a dollar per hour. So if you are a startup, very early stage, working in your garage, suddenly GPUs have become a bit more affordable for you, right? That’s genuine shared public infrastructure.
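As a rough illustration of the affordability point, here is a back-of-envelope comparison in Python. The subsidized rate (under 60 rupees, i.e. under a dollar, per GPU-hour) comes from the figure quoted above; the exchange rate, the market cloud price, and the size of the example training run are illustrative assumptions, not figures from the panel.

```python
# Back-of-envelope cost of a small fine-tuning run at the subsidized
# rate quoted in the panel versus an assumed on-demand cloud rate.

INR_PER_USD = 85            # assumed exchange rate, for illustration only
subsidized_inr_per_hour = 60    # "less than rupees 60 per hour" (quoted)
market_usd_per_hour = 3.0       # assumed typical on-demand GPU price

# Hypothetical workload: 8 GPUs running for 72 hours.
gpu_hours = 8 * 72

subsidized_usd = gpu_hours * subsidized_inr_per_hour / INR_PER_USD
market_usd = gpu_hours * market_usd_per_hour

print(f"GPU-hours: {gpu_hours}")
print(f"Subsidized cost: ~${subsidized_usd:.0f}")
print(f"Assumed on-demand cost: ~${market_usd:.0f}")
```

Under these assumptions the same run costs roughly a few hundred dollars instead of a few thousand, which is the order-of-magnitude gap that makes garage-stage experimentation feasible.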

Government data, it’s very early days, but access to government data is being provided through platforms like AI Kosh, or in Telangana through TGDX. Then there is the financing problem: how do you channel financing into socially sensitive sectors? The central government and the states are thinking of building a fund of funds, which then encourages VCs to co-invest, focusing on those socially sensitive sectors where they would normally not invest. So if you think about it, the ethos is very similar to when we were building DPIs. How do I create shared capabilities centrally, which can then trigger an entire new wave of startups, and therefore a market ecosystem, just like the DPIs did.

So that’s how I see the journey.

C.V. Madhukar

That’s great insight. From what you say, it seems there’s the beginning of a new innovation cycle for the private sector, and we’re looking forward to what comes out. We have about 10 minutes left, so I have a somewhat common forward-looking question for all panelists. As the economists say, in the long run we’re all dead. But what is the long run in the AI ecosystem? Is it five years? Is it three years? Because it’s so hard to predict everything; every day there’s something new happening. So I don’t know, Dr. Hans, if you’re okay going first on this forward-looking question. The question I would ask of you is: as a relatively smaller island nation, how do you expect to leverage this wave of AI innovation over the next three years, maybe five years?

What steps are you anticipating? How are you preparing yourselves to leverage this power that we have to advance our development outcomes?

Dr. Hans Wijayasuriya

So there’s a plus and a minus to being small. Let’s start with some of the challenges. Being small, you’re on the wrong side of the AI divide unless you’re economically in a very powerful position. Singapore, for example, is an outlier: a small country with a lot of economic power, which therefore attracts investors, attracts talent and the siting of business. As a small country, one of the challenges we have in Sri Lanka is getting that minimum level of sovereign AI infrastructure in place, having the ecosystems around it, and retaining our talent. Sri Lanka has very good talent, but retaining and developing that talent for Sri Lanka is a challenge, one that we are confident we can deliver on.

So the future of AI, I think, will depend on the market. It will also depend on the people. It will also depend on trust: the institutions, like data protection, as well as the laws, and Sri Lanka is very mature on this front. So on the trust side, a smaller country can execute precisely, with laser-sharp focus, and therefore has a strength. The talent side, again, is something to focus on. Now, when it comes to the marriage with DPI, I think this falls onto the positives of being a small country, because the ability to implement modular systems in a neat and flexible way, in a way that these blocks themselves will evolve, on the confidence that you have a strong trust environment, gives you the ability to build in AI where, like I mentioned earlier, AI sits.

On top of a solid, mature DPI frame. So I feel the AI future will of course be very closely tied to DPI: AI can add that extra layer of experience, lower cost, faster and more flexible, meaning it can address multiple scenarios, through digital twins and other such AI constructs, to deliver citizens a very customized (I’m from the service industry in the past, so I use the word customized) citizen-specific experience. We’ve been tracking and learning about the focus of the new government and its leadership, and the presence of AI in the world. We’re working with the government’s leadership, making big advances on DPI and AI, and looking forward to much exciting work in Sri Lanka in the next two to three years.

C.V. Madhukar

Thank you for those comments, Dr. Hans. Rob, could I come to you and think about, you know, you’ve gone through the process over the last couple of years with DPI safeguards. You have the Global Digital Compact. I know there’s a lot of work you’re doing on AI safeguards. Moving away from safeguards, I wanted to see how you are envisioning the developmental role of UNDP leveraging AI in the next three years. I guess three years is long term, but anything you can say that would be helpful for us.

Robert Opp

No, absolutely. So I think there’s a couple levels. So there is an internal to the organization level, which is how do we ensure that UNDP itself has capabilities for leveraging AI to maximum effect. And so there’s a kind of a base level of work that we’ve done internally to the organization, upskilling programs, investing in some, you know, making sure foundation model capabilities are available, working in some SLMs, et cetera, et cetera. Then there’s the layer of working across the kind of verticals that UNDP has, whether it’s environmental action, governance programs, energy, et cetera, et cetera. And so how do we embed AI solutions and thinking and approaches into those verticals? And then there’s the picture of how are we going to support our country partners?

And as you said, we’re engaged in quite a few countries already on AI transformation support, and it’s kind of looking at ecosystem pieces. Do countries have that mix of elements that Dr. Hans was referring to? Do you have the compute accessibility? Do you have talent? Do you have the data available? And so on and so forth. But the one thing that we’ve announced during this summit is an exciting partnership with Xstep and a number of other players, and we’re one of those players, on something we’re calling 100 Pathways, or Diffusion Pathways. That is more of a use-case-driven approach: finding, over the next few years, 100 different pathways to scaling responsible use of AI across different use cases. It’s something that we’re really excited about, because I think we need that to complement the ecosystem support.

C.V. Madhukar

That’s so exciting to hear, because I think there will be a lot of iteration, discovery and innovation in finding those 100 pathways to actually add value to people on the ground. Looking forward to what comes out of that. Thank you. Sangbu, if you were to look at the last decade of how development banks have funded and looked at digitization, and now at the three years ahead of you, what might the World Bank and the MDBs do differently to help countries become more ready for the AI era, that you haven’t been doing in the last decade or so? Any thoughts would be great.

Sangbu Kim

So the good news is that, from the Internet network point of view, coverage is pretty good. Even in sub-Saharan Africa, more than 90% of the area is covered by three-plus-generation mobile towers. But the issue is that we are really struggling with a lack of demand. How are we going to fully utilize this by creating more value and profit? So now we are modifying our approach to really think about creating demand through government programs, through developing use cases. That’s why we keep highlighting the importance of small AI. Small AI is not really a small thing; it is really about how we can change the lives of our people.

So we are tweaking our approach a little toward user-centric and demand-driven programs. That’s our approach.

C.V. Madhukar

That’s great. And I think MDBs, and the relationships that they have with countries, can make a big difference in how the evolution and the benefits of AI will come to people. So looking forward to what comes out. Saibal, I know I started off by bucketing you as a private sector guy, but I also know you’ve been thinking deeply about government policy that enables, or doesn’t enable, innovation and growth. So as you look at the next few years, even from a private sector innovation lens, what government policies might propel the innovation ecosystem to serve underserved populations around the world? Any quick thoughts, Saibal?

Saibal Chakraborty

So, see, I think, you know, the government has a very tricky balancing act, right? I mean, when we were going through this entire experience of building AI Kosh, there was obviously, for very good reasons, a lot of sensitivity around sovereign data: what data to expose, and if exposed, to whom, right? Equally, without the sharing of data... I mean, the reason why AI has taken off in a big way in some sectors is because the internet came into the picture 30 years back, and the internet has been pumping out billions and billions of gigabytes of data, right? So AI has something to chew on. Now, if we have to do the same with government data, then that data needs to be exposed in a controlled manner.

So my sense is that, from a policy standpoint, the question is how you actually provide that access to data: walking that tightrope where valuable data is made available to the innovators while not compromising on sovereignty or safety. I think that is one of the policy areas the government has to look at. Specifically for a country like India, or similar countries which operate in a federated model, the center can do only so much. The real action, as we know, in a country like India happens at the state level, and we have 30-plus of them, combining states and union territories. So at the state level also, similar policies and institutions have to be set up. Telangana, for example, has set up a Section 8 public sector undertaking to drive AI, right?

That creates the kind of focus and agility that you will need to keep pace with this technology and do some real work at the grassroots. So my suggestion would be: create those accountable institutions who can anchor and drive AI, and amend policies around data to make sure that the innovators get access while not compromising on sovereignty. It’s not an easy thing to do, but yeah, those would be my words.

C.V. Madhukar

Thank you, Saibal. I think the role of institutions, both for safeguards and to enable the innovation ecosystem, has never been more important than it is now. It’s hard to summarize this conversation, but I will say that I think we’re at the cusp of something extremely important, something very potent in some ways, that can unlock a lot of opportunity for billions of people. Especially, I think, for the segment of the population that was left out of the digital revolution because voice was not the predominant way of interacting. I think AI opens up that window and hopefully will drive much more widespread adoption and usage by common people around the world.

Looking forward to the next few years and thank you very much to our panelists for a wonderful discussion. Thank you all. Thank you.

Speaker 1

Thank you so much to all the speakers. We will just have one memento being given by the organizing team. We made it. Thank you so much for being a part of the India AI Impact Summit. Just to tell you that the expo will be open tomorrow. People still want to come in and people are still not tired, but we are done for today and the sessions are done. Thank you so much. Thank you. Thank you.

Related Resources: Knowledge base sources related to the discussion topics (39)
Factual Notes: Claims verified against the Diplo knowledge base (5)
Confirmed (high)

“C.V. Madhukar is the Chief Executive Officer of CoDevelop and served as the moderator for this session.”

The knowledge base identifies C.V. Madhukar as CEO of CoDevelop and the moderator of the AI in DPI session [S9].

Confirmed (high)

“Madhukar contrasted India’s pragmatic DPI approach with the United States’ focus on artificial general intelligence, jobs and privacy concerns.”

A transcript excerpt notes the US conversation centered on AGI, jobs and privacy, while India takes a different, pragmatic stance, matching Madhukar’s comment [S10].

Confirmed (high)

“India’s leadership in Digital Public Infrastructure is widely recognized.”

Multiple sources highlight India’s leadership and its model being cited for global replication in DPI development [S48] and [S115].

Additional Context (medium)

“India’s DPI approach is uniquely positioned because it operates in multilingual, low‑resource environments, making its frameworks especially relevant for developing countries.”

The knowledge base explains that India’s AI discourse is shaped by multilingual populations and infrastructure constraints, underscoring its distinctive position [S115].

Additional Context (medium)

“Digital Public Infrastructure serves as the essential interoperability backbone for the emerging AI‑enabled Interoperability (AEI) era.”

Broader analyses describe DPI as a critical layer for AI integration, emphasizing data sovereignty, resilience and secure compute as prerequisites for trustworthy AI systems [S101] and [S104].

External Sources (124)
S1
Keynote-Martin Schroeter — -Speaker 1: Role/Title: Not specified, Area of expertise: Not specified (appears to be an event moderator or host introd…
S2
Responsible AI for Children Safe Playful and Empowering Learning — -Speaker 1: Role/title not specified – appears to be a student or child participant in educational videos/demonstrations…
S3
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Vijay Shekar Sharma Paytm — -Speaker 1: Role/Title: Not mentioned, Area of expertise: Not mentioned (appears to be an event host or moderator introd…
S5
S6
High-Level Session 4: From Summit of the Future to WSIS+ 20 — – Robert Opp: Representative from UNDP Robert Opp’s comment broadened the discussion on environmental sustainability: “…
S7
WS #278 Digital Solidarity & Rights-Based Capacity Building — Robert Opp: Okay. Thank you. Well, it’s a pleasure to be here. As Jennifer said, I’m Robert Opp. I come from the Unite…
S8
Day 0 Event #189 Toward the Hamburg Declaration on Responsible AI for the SDG — – CLAIRE: No role/title mentioned ROBERT OPP: Okay. Hello, everyone. This is a strange way of doing a workshop with e…
S9
Panel Discussion AI in Digital Public Infrastructure (DPI) India AI Impact Summit — economy. Saibal Chakraborty, Managing Director and Senior Partner, Boston Consulting Group. The moderator, C.V. Madhukar…
S10
https://dig.watch/event/india-ai-impact-summit-2026/panel-discussion-ai-in-digital-public-infrastructure-dpi-india-ai-impact-summit — economy. Saibal Chakraborty, Managing Director and Senior Partner, Boston Consulting Group. The moderator, C.V. Madhukar…
S11
Agents of Change AI for Government Services &amp; Climate Resilience — – Saibal Chakraborty- Lee Tiedrich- Mike Haley- Srinivas Tallapragada – Saibal Chakraborty- Srinivas Tallapragada
S12
Panel Discussion AI in Digital Public Infrastructure (DPI) India AI Impact Summit — -C.V. Madhukar: Chief Executive Officer of CoDevelop, serving as the moderator for this session
S13
https://dig.watch/event/india-ai-impact-summit-2026/panel-discussion-ai-in-digital-public-infrastructure-dpi-india-ai-impact-summit — economy. Saibal Chakraborty, Managing Director and Senior Partner, Boston Consulting Group. The moderator, C.V. Madhukar…
S14
Panel Discussion AI in Digital Public Infrastructure (DPI) India AI Impact Summit — – Dr. Hans Wijayasuriya- Robert Opp – Dr. Hans Wijayasuriya- Robert Opp- C.V. Madhukar – Dr. Hans Wijayasuriya- Saibal…
S15
African Priorities for the Global Digital Compact: A Comprehensive Discussion Report — As a core thing, inclusion is core. And for me, I look… at digital public infrastructure that is inclusive, that is su…
S16
https://dig.watch/event/india-ai-impact-summit-2026/indogerman-ai-collaboration-driving-economic-development-and-soc — AI systems. So at the end of the day, the aim is to translate the idea of trustworthy AI into testable criteria and prac…
S17
Building Trust through Transparency — Conversely, a different speaker emphasises the importance of cultivating integrity and promoting a mindset that values t…
S18
https://dig.watch/event/india-ai-impact-summit-2026/responsible-ai-in-india-leadership-ethics-global-impact — Yeah, I think definitely the regulations are required, especially because AI can go berserk. And, you know, there are, I…
S19
https://dig.watch/event/india-ai-impact-summit-2026/responsible-ai-in-india-leadership-ethics-global-impact-part1_2 — Yeah, I think definitely the regulations are required, especially because AI can go berserk. And, you know, there are, I…
S20
https://dig.watch/event/india-ai-impact-summit-2026/building-public-interest-ai-catalytic-funding-for-equitable-compute-access — And here, India is not waiting for permission. India is not waiting for permission. India is showing that it can be done…
S21
https://dig.watch/event/india-ai-impact-summit-2026/the-global-power-shift-indias-rise-in-ai-semiconductors — It has to be done and you already mentioned the opportunity, we were with the CEO of Medi today talking about 50 ,000 st…
S22
review article — In a world of sovereign nation states, health continues to be primarily a national responsibility; however, the intensif…
S23
DPI+H – health for all through digital public infrastructure — A global recognition of DPI’s foundational value in healthcare is apparent, though this acknowledgment is coupled with a…
S24
WS #83 the Relevance of Dpgs for Advancing Regional DPI Approaches — ## Areas of Consensus and Ongoing Challenges ## Key Challenges and Priorities ### India: Flexible Modular Architecture…
S25
Effective Governance for Open Digital Ecosystems | IGF 2023 Open Forum #65 — In summary, DPI enables citizens, entrepreneurs, and consumers to participate in society and markets. The Universal DPI …
S26
The Power of the Commons: Digital Public Goods for a More Secure, Inclusive and Resilient World — Eileen Donahoe: Great. First, let me congratulate the organizers here. This is a really remarkable event and it’s a ver…
S27
[Parliamentary Session 4] Fostering Inclusive Digital Innovation and Transformation — Audience: Hello. My question is for Robert. I’m Kundan from India. I work with a non-profit called CG Netswara. I’m co…
S28
AI is here. Are countries ready, or not? | IGF 2023 Open Forum #131 — Robert Opp:Kalia. Yeah. Robert Opp:Thanks Denise. I think I’ll turn to Alison. Robert Opp:Thanks so much, Alain. Denis…
S29
AI Meets Agriculture Building Food Security and Climate Resilien — India is showing that we don’t have to repeat those early mistakes in digital also. By creating interoperable networks b…
S30
Collaborative AI Network – Strengthening Skills Research and Innovation — “We’re talking of AI being a possible DPI, a digital public infrastructure.”[1]. “I think those are aspects which a DPI …
S31
How Small AI Solutions Are Creating Big Social Change — African languages. And we just released a data set of 21 now, 27 voice languages, given that Africa has 2 ,000 or so lan…
S33
AI That Empowers Safety Growth and Social Inclusion in Action — “So we’ve engaged with member states and different stakeholders about their priorities, and let me bring to your attenti…
S34
Secure Finance Risk-Based AI Policy for the Banking Sector — Ajay Kumar Chaudhary opened by highlighting India’s opportunity to lead in AI development while managing associated risk…
S35
Panel Discussion Summary: AI Governance Implementation and Capacity Building in Government — In the document and then in our trainings, we have four pillars. They’re all linked. The first pillar is context-based a…
S36
Future-Ready Education: Enhancing Accessibility & Building | IGF 2023 — 8. The multistakeholder model is critical for inclusive decision-making. Inclusive decision-making requires input from m…
S37
Dynamic Coalition Collaborative Session — Security by design must be embedded from the beginning of development The speaker advocates for inclusive design princi…
S38
United Nations Office for Digital and Emerging Technologies — In his policy brief onA Global Digital Compact – an Open, Free and Secure Digital Future for All, the UN Secretary-Gener…
S39
Bridging the Digital Divide: Inclusive ICT Policies for Sustainable Development — Human rights | Legal and regulatory | Development Policy recommendations include fostering collaboration across sectors…
S40
WS #98 Universal Principles Local Realities Multistakeholder Pathways for DPI — Smriti Parsheera: They are such a pleasure to be a part of such a stellar panel. Let me just begin by introducing you kn…
S41
DPI High-Level Session — A transparent, accountable governance dedicated to DPI solutions is necessary, alongside secure funding to address any r…
S42
Panel Discussion Data Sovereignty India AI Impact Summit — High level of consensus with complementary perspectives rather than conflicting viewpoints. The implications suggest a m…
S43
Sandboxes for Data Governance: Global Responsible Innovation | IGF 2023 WS #279 — Data is considered a critical asset for economic growth and sustainable development. It provides valuable insights for d…
S44
The Challenges of Data Governance in a Multilateral World — The argument is that digital sovereignty allows countries to have control over their own digital assets and data, which …
S45
How AI Drives Innovation and Economic Growth — Artificial intelligence | Financial mechanisms | Social and economic development Kremer explains that while private com…
S46
Shaping the Future AI Strategies for Jobs and Economic Development — These key comments transformed what could have been a superficial discussion about AI benefits into a sophisticated anal…
S47
Financing Broadband Networks of the Future to bridge digital — This bosoms as a regulatory obstruction, causing unused spectrum in rural regions. The analysis underlines calls for a s…
S48
Building Indias Digital and Industrial Future with AI — So last year, the bank came up with a digital public infrastructure and development report where it articulated what it …
S49
WS #238 Advancing financial inclusion through consumer-centric DPI — Audience: Thank you. Thank you. Thank you for the nice discussion in the nice presentations. I really like the way bot…
S50
Open Forum #70 the Future of DPI Unpacking the Open Source AI Model — Audience: Hi. Can you hear me? Yes. Please go ahead. Hi. My name is Marin. I am a researcher at IT4Change, which is an N…
S51
WSIS Action Line C7: E-health – Fostering foundations for digital health transformation in the age of AI — ## Background and Context Hani Eskandar: Yes. Okay, so I will really focus on one of the things that is very much in li…
S52
High Level Session 2: Digital Public Goods and Global Digital Cooperation — Human rights | Infrastructure | Legal and regulatory International cooperation essential for DPG success The framework…
S53
Panel Discussion AI in Digital Public Infrastructure (DPI) India AI Impact Summit — Early planning with safeguards is essential – inclusion must be the driving KPI rather than efficiency alone to prevent …
S54
United Nations Office for Digital and Emerging Technologies — In his policy brief onA Global Digital Compact – an Open, Free and Secure Digital Future for All, the UN Secretary-Gener…
S55
The future of Digital Public Infrastructure for environmental sustainability — The UNDP investigates DPI’s potential in driving a large-scale green transition by exploring payment schemes for environ…
S56
AI and Global Power Dynamics: A Comprehensive Analysis of Economic Transformation and Geopolitical Implications — And we found very interesting picture. In some countries, demand is very high, supply is low. Some actually, we are deve…
S57
Open Forum #71 Advancing Rights-Respecting AI Governance and Digital Inclusion through G7 and G20 — Gilwald argues for reorienting DPI evaluation from traditional supplier-focused metrics to demand-side assessment that p…
S58
Agenda item 6: other matters/OEWG 2025 — – The United Kingdom proposed safeguards to ensure appropriate stakeholder involvement. Albania: Mr. Chair, Excellenci…
S59
WS #257 Data for Impact Equitable Sustainable DPI Data Governance — Digital Public Infrastructure (DPI) is a key driver of national digital transformation, fostering inclusive innovation a…
S60
Open Forum #76 Digital Literacy As a Precondition for Achieving Universal a — – Dr. Ibiso Kingsley-George Comprehensive policy framework requirements Comprehensive policy frameworks beyond basic b…
S61
The Foundation of AI Democratizing Compute Data Infrastructure — And it’s just false, and it’s false today as well. Our current technology is limited. It’s useful. There’s no question i…
S62
Diplomacy of small states — Small states recognise the valuable role thatmultilateral diplomacyplays in enhancing their engagement and amplifying th…
S63
Digital Ecosystems and Competition Law: Ecological Approach (HSE University) — Competition authorities in developing countries face resource constraints, but it is argued that they should intervene e…
S64
Building a Digital Society, from Vision to Implementation — Gary Patterson: Yes. Thanks. Thanks, Chris. So, as we said before, the small nations like Jamaica face these severe cons…
S65
Sandboxes for Data Governance: Global Responsible Innovation | IGF 2023 WS #279 — GovTech sandboxes have emerged as a key component of Lithuania’s innovation ecosystem. These sandboxes, initiated in 201…
S66
Briefing on the Global Digital Compact- GDC (UNCTAD) — Furthermore, the importance of striking a balance between data protection and governance and the free flow of data for e…
S67
Revisiting 10 AI and digital forecasts for 2025: Predictions and Reality — Digital Public Infrastructure (DPI) standards:Standardisation became the linchpin for donor funding, vendor interoperabi…
S68
AI That Empowers Safety Growth and Social Inclusion in Action — “So we’ve engaged with member states and different stakeholders about their priorities, and let me bring to your attenti…
S69
Secure Finance Risk-Based AI Policy for the Banking Sector — Ajay Kumar Chaudhary opened by highlighting India’s opportunity to lead in AI development while managing associated risk…
S70
Panel Discussion Summary: AI Governance Implementation and Capacity Building in Government — In the document and then in our trainings, we have four pillars. They’re all linked. The first pillar is context-based a…
S71
Building Public Interest AI Catalytic Funding for Equitable Compute Access — “computer capability collaboration connectivity compliance and context”[3]. “From these discussions, there were six foun…
S72
DPI High-Level Session — Zunaid Palak:Thank you. Thank you very much for having me here and giving me the opportunity to share some of our succes…
S73
Dynamic Coalition Collaborative Session — Rights of persons with disabilities | Development | Human rights principles Security by design must be embedded from th…
S74
Panel Discussion AI in Digital Public Infrastructure (DPI) India AI Impact Summit — Early planning with safeguards is essential – inclusion must be the driving KPI rather than efficiency alone to prevent …
S75
Open Forum #6 Promoting tech companies to ensure children’s online safety — Integrating safety considerations from the earliest stages of product design is crucial for protecting children online.
S76
United Nations Office for Digital and Emerging Technologies — In his policy brief onA Global Digital Compact – an Open, Free and Secure Digital Future for All, the UN Secretary-Gener…
S77
High Level Session 4: Securing Child Safety in the Age of the Algorithms — Safety by design and default is essential for child protection
S78
Open Forum #53 AI for Sustainable Development Country Insights and Strategies — Infrastructure | Economic Success of Aadhaar, UPI, and data layer implementations that enabled various sector applicati…
S79
AI Meets Agriculture Building Food Security and Climate Resilien — India is showing that we don’t have to repeat those early mistakes in digital also. By creating interoperable networks b…
S80
Sandboxes for Data Governance: Global Responsible Innovation | IGF 2023 WS #279 — Data is considered a critical asset for economic growth and sustainable development. It provides valuable insights for d…
S81
What is it about AI that we need to regulate? — TheDigital Identity workshopoffered concrete technical solutions, with Dr. Jimson suggesting federated databases where”c…
S82
Open Forum #18 World Economic Forum – Building Trustworthy Governance — 1. Effectively balancing data sovereignty with cross-border data flows The panel highlighted the challenges of balancin…
S83
Regulating Open Data_ Principles Challenges and Opportunities — Digital ecosystems simply do not function in silos. However, enabling data to move across borders should not mean that c…
S84
Safe and Responsible AI at Scale Practical Pathways — Prem Ramaswami from Google’s Data Commons project provided a complementary perspective on making public data accessible …
S85
Building Trustworthy AI Foundations and Practical Pathways — “But similarly now, econ of maybe writing novels is gone.”[20]. “The movie industry is worried.”[21]. “That entire econo…
S86
How AI Drives Innovation and Economic Growth — Artificial intelligence | Financial mechanisms | Social and economic development Kremer explains that while private com…
S87
The Foundation of AI Democratizing Compute Data Infrastructure — “So we are identifying agriculture, education, healthcare, and some more.”[83]. “So inspire them that they can really do…
S88
The Global Power Shift India’s Rise in AI & Semiconductors — The discussion maintained an optimistic and forward-looking tone throughout, with speakers expressing confidence in Indi…
S89
AI Innovation in India — The tone was consistently celebratory, inspirational, and optimistic throughout the discussion. Speakers expressed pride…
S90
Panel Discussion AI & Cybersecurity _ India AI Impact Summit — The discussion maintained a consistently optimistic and collaborative tone throughout. Speakers expressed enthusiasm and…
S91
The Innovation Beneath AI: The US-India Partnership powering the AI Era — The tone was consistently optimistic and forward-looking throughout, with panelists expressing excitement about opportun…
S92
Session — The tone was primarily analytical and forward-looking, with the speaker presenting evidence-based predictions while ackn…
S93
Webinar session — The discussion maintained a diplomatic and constructive tone throughout, with participants demonstrating nuanced thinkin…
S94
Safeguarding Children with Responsible AI — The discussion maintained a tone of “measured optimism” throughout. It began with urgency and concern (particularly in B…
S95
Resilient infrastructure for a sustainable world — The tone was professional and collaborative throughout, with speakers building on each other’s points constructively. Th…
S96
Delegated decisions, amplified risks: Charting a secure future for agentic AI — The tone was consistently critical and cautionary throughout, with Whittaker maintaining a technically informed but acce…
S97
Panel Discussion Inclusion Innovation & the Future of AI — The discussion maintained a constructive and collaborative tone throughout, with panelists building on each other’s poin…
S98
AI for Social Good Using Technology to Create Real-World Impact — The tone was consistently optimistic and collaborative throughout, with speakers demonstrating genuine enthusiasm for AI…
S99
Open Forum #68 WSIS+20 Review and SDGs: A Collaborative Global Dialogue — The discussion maintained a constructive and collaborative tone throughout, characterized by cautious optimism balanced …
S100
Open Forum #64 Local AI Policy Pathways for Sustainable Digital Economies — The discussion maintained a collaborative and solution-oriented tone throughout, with participants building on each othe…
S101
AI as critical infrastructure for continuity in public services — The discussion maintained a collaborative and constructive tone throughout, with participants building on each other’s p…
S102
Leaders TalkX: Future-ready: enhancing skills for a digital tomorrow — The discussion maintained a consistently positive, collaborative, and inspiring tone throughout. Panelists were enthusia…
S103
Open Forum #58 Collaborating for Trustworthy AI an Oecd Toolkit and Spotlight on AI in Government — The discussion maintained a collaborative and constructive tone throughout, characterized by knowledge sharing and mutua…
S104
Empowering People with Digital Public Infrastructure — Benefits and Potential of DPI Rene Saul: You still need to maintain those rules so that you actually protect the sanct…
S105
A Digital Future for All (morning sessions) — Achim Steiner: Isn’t it amazing? This is all happening already. And congratulations just to three more pioneers. In …
S106
Building the AI-Ready Future From Infrastructure to Skills — The tone was consistently optimistic and collaborative throughout, with speakers expressing excitement about AI’s potent…
S107
Workshop 11: São Paulo Multistakeholder Guidelines – The Way Forward in Multistakeholder and Multilateral Digital Processes — Remote moderator (Frances), remote session moderator; Wolfgang Kleinwächter, serving as session moderator and int…
S109
Driving Indias AI Future Growth Innovation and Impact — The discussion maintained an optimistic and forward-looking tone throughout, characterized by enthusiasm for India’s AI …
S110
Creating digital public infrastructure that empowers people | IGF 2023 Open Forum #168 — These findings provide valuable insights into India’s approach to DPI.
S111
https://dig.watch/event/india-ai-impact-summit-2026/ai-algorithms-and-the-future-of-global-diplomacy — I think the counselor did allude to industrial AI. That’s a fantastic use case of cooperation where you and India could …
S112
https://dig.watch/event/india-ai-impact-summit-2026/from-innovation-to-impact_-bringing-ai-to-the-public — I mean, literally, PTM, we both of us, put more than 10,000 of you, put 25,000 crore on the table for making this humb…
S113
Leveraging AI4All_ Pathways to Inclusion — It’s not just good karma. It’s not just charity. It’s good business. So I think those are kind of the two philosophies I…
S114
High-level AI Standards panel — Four key elements for collaboration: translate, structure, include, and connect
S115
Building the Next Wave of AI_ Responsible Frameworks & Standards — What is interesting is India is uniquely positioned in this global AI discourse. Most global AI frameworks are designed …
S116
UNSC meeting: Artificial intelligence, peace and security — Brazil: Thank you, Mr. President, Mr. President, dear colleagues. I thank the Secretary General for his briefing today an…
S117
Main Session 2: The governance of artificial intelligence — Kakkar stressed the importance of meaningful multi-stakeholder participation and strengthening mechanisms like the Inter…
S118
WS #283 AI Agents: Ensuring Responsible Deployment — User control and human oversight are essential safeguards, particularly for high-impact decisions that are difficult to …
S119
Leaders TalkX: Towards a safer connected world: collaborative strategies to strengthen digital trust and cyber resilience — Fahmi Fadzil: Thank you. Assalamualaikum, good morning, bonjour. I was following very closely the speech given by Presid…
S120
Host Country Open Stage — This paradoxical statement challenges the typical understanding of digital sovereignty as protectionist or isolationist….
S121
Panel #3: « Gouverner les données : entre souveraineté, éthique et sécurité à l’ère de l’interconnexion » — Antoine Barbry Merci. Merci monsieur Bartholin, pour cet éclairage sur ces conventions très importantes qui sont dévelop…
S122
Defence against the DarkWeb Arts: Youth Perspective | IGF 2023 WS #72 — Investing in technological sovereignty is crucial for nations to have control over their internet space. This involves d…
S123
IGF to GDC- An Equitable Framework for Developing Countries | IGF 2023 Open Forum #46 — Audience:Good morning. My name is Mahesh Perra from Sri Lanka, a small island in the South Asia region. Actually, we hav…
S124
(Interactive Dialogue 1) Summit of the Future – General Assembly, 79th session — – Permanent Representative of Sri Lanka Seychelles: Ladies and gentlemen, I am honored to speak today on an important …
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
Dr. Hans Wijayasuriya
5 arguments · 134 words per minute · 1135 words · 505 seconds
Argument 1
Inclusion imperative (Dr. Hans Wijayasuriya)
EXPLANATION
Hans stresses that any government‑led AI deployment must first ensure it does not widen existing inequalities. Inclusion means using AI and DPI to reduce divides through features like voice‑first interfaces, translation, multimodality, and human‑in‑the‑loop mechanisms.
EVIDENCE
He explains that inclusion requires new capabilities not to increase divides and cites AI-enabled voice-first, translation, and multimodal services, as well as the need for a human in the loop, as ways DPI can broaden access [20-25].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The emphasis on inclusion as core to digital public infrastructure is echoed in the African Priorities report, which calls inclusion a key priority for citizen-centered DPI [S15], and the release of multilingual voice datasets to reach rural users supports the voice-first, multimodal approach [S31].
MAJOR DISCUSSION POINT
Inclusion imperative (Dr. Hans Wijayasuriya)
AGREED WITH
Robert Opp
Argument 2
Integrity foundation (Dr. Hans Wijayasuriya)
EXPLANATION
Hans argues that AI should be layered on top of a mature DPI foundation rather than redefining it. Core elements such as clean data, robust data architectures, reliable APIs, and institutional capacity must be established first.
EVIDENCE
He outlines that DPI foundations include clean data, data maturity, clean registers, secure APIs, and institutional capacity, which must precede AI integration [28-34].
MAJOR DISCUSSION POINT
Integrity foundation (Dr. Hans Wijayasuriya)
AGREED WITH
Saibal Chakraborty, Sangbu Kim
DISAGREED WITH
Saibal Chakraborty
Argument 3
Safeguards requirement (Dr. Hans Wijayasuriya)
EXPLANATION
Hans highlights the need for robust safeguards when deploying AI, focusing on bias detection, consent augmentation, explainability, and human oversight. He warns that unchecked bias or opacity at scale can cause widespread harm.
EVIDENCE
He details safeguards such as bias detection, AI-generated consent, explainability, and human-in-the-loop, emphasizing that bias and opacity at scale would cause harm [37-43].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Calls for robust safeguards align with discussions on trustworthy AI criteria and the need for regulation to prevent AI misbehaviour, as highlighted in the trustworthy AI criteria report [S16] and examples of AI-driven transaction checks [S18][S19], as well as the universal DPI safeguards initiative [S25].
MAJOR DISCUSSION POINT
Safeguards requirement (Dr. Hans Wijayasuriya)
AGREED WITH
Robert Opp
DISAGREED WITH
Robert Opp
Argument 4
Sovereignty capability (Dr. Hans Wijayasuriya)
EXPLANATION
Hans defines sovereignty as the ability to maintain neutral, vendor‑agnostic capabilities rather than isolation. Control over data classification, privacy, and core technologies enables governments to choose and manage AI‑enhanced DPI safely.
EVIDENCE
He describes sovereignty as building neutral capability across vendors, cloud, and technologies, with control over data classification, protection, and privacy, positioning AI as an accelerator on top of DPI [44-45].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The need for neutral, vendor-agnostic capabilities and control over data is reflected in calls for regulatory checks on AI systems [S18][S19] and India’s compute capacity plan that builds sovereign AI infrastructure [S20].
MAJOR DISCUSSION POINT
Sovereignty capability (Dr. Hans Wijayasuriya)
AGREED WITH
Saibal Chakraborty
DISAGREED WITH
Saibal Chakraborty
Argument 5
Challenges of scale, talent retention and modular DPI advantage (Dr. Hans Wijayasuriya)
EXPLANATION
Hans reflects on the constraints small nations face, such as limited AI infrastructure and talent retention, but notes that modular DPI allows flexible, rapid AI integration. He sees AI adding customized, low‑cost services built on a mature DPI foundation.
EVIDENCE
He cites Sri Lanka’s need for sovereign AI infrastructure, talent challenges, and the benefit of modular DPI that enables AI to deliver billions of scenarios and customized citizen experiences [146-158].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Questions about countries’ mix of compute, talent and data echo the challenges of scale discussed in the panel [S10], while India’s flexible modular architecture is highlighted as a way to overcome such constraints [S24].
MAJOR DISCUSSION POINT
Challenges of scale, talent retention and modular DPI advantage (Dr. Hans Wijayasuriya)
AGREED WITH
Sangbu Kim
Robert Opp
3 arguments · 170 words per minute · 854 words · 300 seconds
Argument 1
Universal safeguards framework (Robert Opp)
EXPLANATION
Robert explains that UNDP, with partners like Co‑Develop and the Gates Foundation, has created a universal DPI safeguards framework to guide countries. The framework is now being piloted in several national implementations.
EVIDENCE
He notes the collaborative effort that produced a universal DPI safeguards framework, supported by Co-Develop and Gates Foundation, and its rollout in multiple countries over the past year and a half [56-58].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The existence of a universal DPI safeguards framework is confirmed in the panel discussion notes and the Universal DPI Safeguards initiative documentation [S9][S25].
MAJOR DISCUSSION POINT
Universal safeguards framework (Robert Opp)
AGREED WITH
Dr. Hans Wijayasuriya
DISAGREED WITH
Dr. Hans Wijayasuriya
Argument 2
Inclusion‑by‑design KPI (Robert Opp)
EXPLANATION
Robert stresses that inclusion should be a primary key performance indicator when designing DPI and AI systems. Early, design‑time focus on inclusion prevents exclusionary outcomes later on.
EVIDENCE
He argues that starting safeguards discussions early leads to better inclusion outcomes, and that if inclusion is the driving KPI, planning must begin with people in mind, contrasting it with efficiency-only approaches [63-66].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The centrality of inclusion in DPI design is stressed in the African Priorities report, which calls for inclusive, citizen-centered infrastructure [S15], and the multilingual voice dataset initiative further illustrates design-time inclusion efforts [S31].
MAJOR DISCUSSION POINT
Inclusion‑by‑design KPI (Robert Opp)
AGREED WITH
Dr. Hans Wijayasuriya
Argument 3
Upskilling, foundation models and 100 use‑case pathways (Robert Opp)
EXPLANATION
Robert outlines UNDP’s three‑layer strategy: internal capacity building through upskilling and foundation‑model access; embedding AI across thematic verticals; and supporting partner countries via 100 diffusion pathways. The pathways aim to scale responsible AI use cases.
EVIDENCE
He describes internal upskilling programs, making foundation-model capabilities available, embedding AI in sectors like environment and governance, and the announced partnership to develop 100 use-case pathways for responsible AI diffusion [168-174][175-179].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Robert’s three-layer strategy of capacity building, foundation-model access and diffusion pathways is mentioned in the panel discussion overview and aligns with the need for ecosystem elements such as compute and talent [S9][S10].
MAJOR DISCUSSION POINT
Upskilling, foundation models and 100 use‑case pathways (Robert Opp)
AGREED WITH
Saibal Chakraborty
Sangbu Kim
2 arguments · 108 words per minute · 474 words · 263 seconds
Argument 1
DPI as interoperability backbone for AI (Sangbu Kim)
EXPLANATION
Sangbu argues that DPI provides essential interoperability, especially in the AI era, enabling user‑centric services. It acts as a critical tool for ensuring seamless data exchange across platforms.
EVIDENCE
He points out that DPI ensures interoperability, supports user-centricity, and is a key tool for the AI era, contrasting it with earlier mobile-era approaches [81-89].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The role of DPI in providing interoperable, open-protocol networks for AI services is described in the discussion of open protocols like Beacon and collaborative AI networks [S29][S30].
MAJOR DISCUSSION POINT
DPI as interoperability backbone for AI (Sangbu Kim)
AGREED WITH
Dr. Hans Wijayasuriya
Argument 2
Demand creation through small‑AI use cases (Sangbu Kim)
EXPLANATION
Sangbu notes that while network coverage is high, demand for AI‑driven services is low. He proposes generating demand via government programmes and small‑AI use cases that are user‑centric and value‑driven.
EVIDENCE
He cites over 90% mobile-tower coverage in Sub-Saharan Africa, the lack of demand, and the shift toward creating demand through government programmes and small-AI use cases that are user-centric [182-190].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Efforts to reach underserved users through voice-first, multilingual datasets illustrate demand-creation strategies for AI services in rural contexts [S31], and the panel highlighted low demand despite high coverage, prompting small-AI pilots [S9].
MAJOR DISCUSSION POINT
Demand creation through small‑AI use cases (Sangbu Kim)
DISAGREED WITH
Saibal Chakraborty
Saibal Chakraborty
3 arguments · 155 words per minute · 978 words · 377 seconds
Argument 1
Open population‑scale DPI spurs private‑sector unicorns (Saibal Chakraborty)
EXPLANATION
Saibal highlights that India’s open, population‑scale DPIs like Aadhaar and UPI have catalysed a surge of private‑sector innovation, resulting in over 120 unicorns that leverage these infrastructures. The openness creates a fertile ground for startups.
EVIDENCE
He recounts the evolution from Aadhaar to UPI, describing them as open population-scale software that triggered massive innovation, noting that India now hosts 120 unicorns, each leveraging DPIs [99-103].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
India’s ecosystem of 50,000 startups and the emergence of over 120 unicorns leveraging Aadhaar and UPI demonstrates the catalytic effect of open, population-scale DPI [S21].
MAJOR DISCUSSION POINT
Open population‑scale DPI spurs private‑sector unicorns (Saibal Chakraborty)
Argument 2
Shared AI infrastructure, affordable compute and targeted funding (Saibal Chakraborty)
EXPLANATION
Saibal explains that AI is being treated as a shared public infrastructure in India, with cheap compute resources (e.g., 38,000 GPUs at <$1/hr) and government data access via platforms like AI Coach. Funding mechanisms such as fund‑of‑funds aim to direct venture capital toward socially sensitive sectors.
EVIDENCE
He details the AI Coach platform, the availability of 38,000 GPUs at less than a dollar per hour, early government data access through AI Coach and TGDX, and the creation of fund-of-funds to channel VC investment into underserved sectors [122-130].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
India’s compute capacity plan provides 38,000 GPUs at sub-$1/hr, constituting shared, low-cost AI infrastructure, and fund-of-funds mechanisms aim to channel VC into socially relevant sectors [S20][S21].
MAJOR DISCUSSION POINT
Shared AI infrastructure, affordable compute and targeted funding (Saibal Chakraborty)
AGREED WITH
Robert Opp
DISAGREED WITH
Sangbu Kim
Argument 3
Controlled government data sharing & accountable institutions (Saibal Chakraborty)
EXPLANATION
Saibal stresses the need for policies that balance sovereign data protection with controlled data sharing to innovators. He recommends establishing accountable institutions at both central and state levels, citing Telangana’s Section 8 public‑sector undertaking as a model.
EVIDENCE
He discusses the tension between sovereign data protection and the need for data exposure, the necessity of controlled access, and the example of Telangana setting up a Section 8 PSU to drive AI and ensure agility [198-205].
MAJOR DISCUSSION POINT
Controlled government data sharing & accountable institutions (Saibal Chakraborty)
AGREED WITH
Dr. Hans Wijayasuriya
DISAGREED WITH
Dr. Hans Wijayasuriya
C.V. Madhukar
1 argument · 139 words per minute · 1254 words · 538 seconds
Argument 1
India as a pragmatic, optimistic middle ground between US and Chinese models (C.V. Madhukar)
EXPLANATION
Madhukar frames India’s AI stance as neither chasing AGI like the US nor succumbing to despondency, but as a pragmatic, optimistic nation seeking to adopt the best of multiple models. He notes this optimism has been evident throughout the summit.
EVIDENCE
He contrasts the US focus on AGI and job worries with India’s approach of evaluating Chinese, American, and other models to harness AI for national benefit, describing the palpable optimism over the past days [5-11].
MAJOR DISCUSSION POINT
India as a pragmatic, optimistic middle ground between US and Chinese models (C.V. Madhukar)
Speaker 1
1 argument · 74 words per minute · 129 words · 104 seconds
Argument 1
Gratitude, continuation of expo and emphasis on ongoing engagement (Speaker 1)
EXPLANATION
Speaker 1 thanks all participants, announces the distribution of a memento, and reminds the audience that the expo will remain open, encouraging continued interaction beyond the summit.
EVIDENCE
He thanks the speakers, mentions a memento from the organizing team, notes that the expo will be open the next day, and expresses appreciation for everyone’s involvement [220-226].
MAJOR DISCUSSION POINT
Gratitude, continuation of expo and emphasis on ongoing engagement (Speaker 1)
Agreements
Agreement Points
Inclusion must be central and addressed early in DPI/AI design
Speakers: Dr. Hans Wijayasuriya, Robert Opp
Inclusion imperative (Dr. Hans Wijayasuriya) Inclusion‑by‑design KPI (Robert Opp)
Both speakers stress that inclusion should be a primary consideration when designing digital public infrastructure and AI systems, and that safeguards need to be discussed at the design stage to avoid exclusion at scale [20-25][63-66].
POLICY CONTEXT (KNOWLEDGE BASE)
The India AI Impact Summit highlighted that inclusion should be the primary KPI in DPI design, emphasizing early safeguards to avoid exclusion [S53].
Robust safeguards (bias detection, explainability, human‑in‑the‑loop) are essential for AI‑enabled DPI
Speakers: Dr. Hans Wijayasuriya, Robert Opp
Safeguards requirement (Dr. Hans Wijayasuriya) Universal safeguards framework (Robert Opp)
Both highlight the need for a safeguards framework covering bias detection, consent augmentation, explainability and human oversight, noting that such safeguards must be embedded early in the DPI lifecycle [37-43][56-58].
POLICY CONTEXT (KNOWLEDGE BASE)
Panel discussions stressed the need for bias detection, explainability and human-in-the-loop as core safeguards, and the UK’s proposal for stakeholder safeguards reinforces this requirement [S53][S58].
AI should be layered on top of a mature DPI foundation rather than replace it
Speakers: Dr. Hans Wijayasuriya, Saibal Chakraborty, Sangbu Kim
Integrity foundation (Dr. Hans Wijayasuriya) Shared AI infrastructure, affordable compute and targeted funding (Saibal Chakraborty) DPI as interoperability backbone for AI (Sangbu Kim)
All agree that a solid DPI base (clean data, robust APIs, institutional capacity) is a prerequisite, with AI acting as an accelerator built on that infrastructure [28-34][122-124][81-89].
Modular, interoperable DPI enables rapid, customized AI service delivery
Speakers: Dr. Hans Wijayasuriya, Sangbu Kim
Challenges of scale, talent retention and modular DPI advantage (Dr. Hans Wijayasuriya) DPI as interoperability backbone for AI (Sangbu Kim)
Both note that a modular, interoperable DPI allows flexible, scalable AI integration, delivering billions of scenario-specific services and customized citizen experiences [146-158][81-89].
POLICY CONTEXT (KNOWLEDGE BASE)
ITU and World Bank efforts to create modular DPI standards facilitate interoperability and rapid AI service deployment, positioning modular DPI as a catalyst for customized services [S67].
Affordable compute and shared AI platforms are critical for private‑sector innovation
Speakers: Saibal Chakraborty, Robert Opp
Shared AI infrastructure, affordable compute and targeted funding (Saibal Chakraborty) Upskilling, foundation models and 100 use‑case pathways (Robert Opp)
Both emphasize that low-cost GPU access and foundation-model availability, coupled with capacity-building programmes, are essential to enable startups and responsible AI diffusion [122-126][168-173][175-179].
POLICY CONTEXT (KNOWLEDGE BASE)
Analyses of AI democratization stress that affordable compute and shared platforms are essential to enable private-sector innovators to build AI applications [S61].
Governments need accountable institutions that balance data sovereignty with controlled data sharing for innovation
Speakers: Saibal Chakraborty, Dr. Hans Wijayasuriya
Controlled government data sharing & accountable institutions (Saibal Chakraborty) Sovereignty capability (Dr. Hans Wijayasuriya)
Both argue that neutral, vendor-agnostic capabilities and accountable institutions are required to protect sovereignty while providing regulated data access to innovators [44-45][198-205][209-212].
POLICY CONTEXT (KNOWLEDGE BASE)
The Global Digital Compact and DPI governance literature call for institutions that safeguard data sovereignty while enabling controlled sharing to foster innovation [S59][S66].
Similar Viewpoints
Both see inclusion and safeguards as foundational pillars that must be embedded from the outset of DPI and AI projects to avoid exclusion and harm at scale [20-25][63-66][37-43][56-58].
Speakers: Dr. Hans Wijayasuriya, Robert Opp
Inclusion imperative (Dr. Hans Wijayasuriya) Inclusion‑by‑design KPI (Robert Opp) Safeguards requirement (Dr. Hans Wijayasuriya) Universal safeguards framework (Robert Opp)
Both view DPI as the essential backbone that, when combined with affordable shared AI resources, can unlock large‑scale, user‑centric services and innovation [122-124][81-89].
Speakers: Saibal Chakraborty, Sangbu Kim
Shared AI infrastructure, affordable compute and targeted funding (Saibal Chakraborty) DPI as interoperability backbone for AI (Sangbu Kim)
Both stress that building internal capacity (upskilling, access to foundation models) and providing low‑cost compute are key levers for scaling responsible AI use cases across sectors [122-126][168-173][175-179].
Speakers: Saibal Chakraborty, Robert Opp
Shared AI infrastructure, affordable compute and targeted funding (Saibal Chakraborty) Upskilling, foundation models and 100 use‑case pathways (Robert Opp)
Unexpected Consensus
Modular DPI as a strategic advantage for small, resource‑constrained nations
Speakers: Dr. Hans Wijayasuriya, Sangbu Kim
Challenges of scale, talent retention and modular DPI advantage (Dr. Hans Wijayasuriya) DPI as interoperability backbone for AI (Sangbu Kim)
It is notable that a small island nation (Sri Lanka) and a global development bank (World Bank) independently converge on the view that modular, interoperable DPI can overcome limited resources and enable rapid AI integration, despite their different institutional contexts [146-158][81-89].
POLICY CONTEXT (KNOWLEDGE BASE)
Case studies of Jamaica and other small states highlight modular DPI as a way to overcome limited resources and achieve digital development goals [S64][S62].
Overall Assessment

The panel shows strong convergence around four core themes: (1) inclusion and safeguards must be embedded early; (2) a mature, interoperable DPI foundation is prerequisite for AI; (3) affordable compute and shared AI platforms are essential for private‑sector and development‑partner innovation; (4) governments need accountable, neutral institutions to balance sovereignty with data sharing. These agreements cut across digital inclusion, AI governance, data governance and the enabling environment for digital development.

High consensus – most speakers, regardless of sector (government, multilateral, private), articulate the same set of principles, indicating a shared roadmap for scaling AI‑enabled DPI that can guide policy, investment and capacity‑building efforts.

Differences
Different Viewpoints
Sequencing of AI integration with DPI
Speakers: Dr. Hans Wijayasuriya, Saibal Chakraborty
Integrity foundation (Dr. Hans Wijayasuriya) Shared AI infrastructure, affordable compute and targeted funding (Saibal Chakraborty)
Hans argues that AI should be layered on top of a mature DPI foundation and must not redefine DPI, stressing clean data, robust APIs and institutional capacity before AI can be added [28-34]. Saibal treats AI as a shared public infrastructure comparable to DPI, highlighting affordable compute and government data platforms as co-foundational elements that can be deployed alongside DPI [122-124]. This reflects a disagreement on whether AI is an add-on to existing DPI or a parallel, co-foundational layer.
POLICY CONTEXT (KNOWLEDGE BASE)
Debates at the AI Impact Summit and IGF emphasize the need to decide whether AI should precede or follow DPI rollout, reflecting unresolved sequencing challenges [S53].
Readiness of universal safeguards framework
Speakers: Robert Opp, Dr. Hans Wijayasuriya
Universal safeguards framework (Robert Opp) Safeguards requirement (Dr. Hans Wijayasuriya)
Robert states that a universal DPI safeguards framework has been created with partners and is now being piloted in several countries [56-58]. Hans, while emphasizing the need for safeguards, notes that the ecosystem is still being assembled and that a “perfect experience” is not yet achieved, implying the framework is not fully mature [45-47]. This shows a disagreement on the current maturity and implementation status of safeguards.
POLICY CONTEXT (KNOWLEDGE BASE)
UK-proposed safeguards and calls for early planning indicate that a universally ready safeguards framework is still under development [S58][S53].
Balance between data openness and sovereign control
Speakers: Saibal Chakraborty, Dr. Hans Wijayasuriya
Controlled government data sharing & accountable institutions (Saibal Chakraborty) Sovereignty capability (Dr. Hans Wijayasuriya)
Saibal calls for policies that enable controlled sharing of government data with innovators while protecting sovereignty, recommending accountable institutions at central and state levels [203-205][211-212]. Hans defines sovereignty as maintaining neutral, vendor-agnostic capability and control over data classification and privacy, emphasizing capability over openness [44-45]. The two positions differ on the extent and manner of opening government data.
POLICY CONTEXT (KNOWLEDGE BASE)
Policy briefs on the Global Digital Compact discuss the tension between open data flows and national sovereignty, urging balanced governance [S59][S66].
Demand creation versus supply‑side AI enablement
Speakers: Sangbu Kim, Saibal Chakraborty
Demand creation through small‑AI use cases (Sangbu Kim) Shared AI infrastructure, affordable compute and targeted funding (Saibal Chakraborty)
Sangbu points out that despite high network coverage, demand for AI services is low and proposes generating demand through government-driven small-AI use cases that are user-centric [184-190]. Saibal focuses on making compute affordable and providing data platforms (e.g., AI Coach) to enable startups, assuming that supply of infrastructure will drive uptake [124-126]. The disagreement lies in whether to prioritize demand generation or supply-side enablement first.
POLICY CONTEXT (KNOWLEDGE BASE)
G7/G20 dialogues and analyses of global AI demand-supply gaps argue for shifting focus from supply-centric AI provision to demand-driven public value creation [S57][S56].
Unexpected Differences
Maturity of the universal safeguards framework
Speakers: Robert Opp, Dr. Hans Wijayasuriya
Universal safeguards framework (Robert Opp) Safeguards requirement (Dr. Hans Wijayasuriya)
It is surprising that Robert, representing UNDP, claims the framework is already being implemented in multiple countries [56-58], while Hans, a senior government official, suggests the safeguards ecosystem is still being assembled and far from a “perfect experience” [45-47]. The divergence between an international development agency and a national government on the same safeguard initiative was not anticipated.
POLICY CONTEXT (KNOWLEDGE BASE)
Stakeholder discussions note that safeguards frameworks are still evolving, with maturity levels varying across jurisdictions [S58][S53].
Sequencing of AI and DPI development
Speakers: Dr. Hans Wijayasuriya, Saibal Chakraborty
Integrity foundation (Dr. Hans Wijayasuriya) Shared AI infrastructure, affordable compute and targeted funding (Saibal Chakraborty)
Both speakers are leading experts in AI‑enabled development, yet they hold opposite views on whether AI should be an add‑on to a pre‑existing DPI (Hans) or a co‑foundational public infrastructure deployed in parallel (Saibal). This contrast in strategic sequencing was not expected given their shared focus on national AI missions.
POLICY CONTEXT (KNOWLEDGE BASE)
The same sequencing debate appears in multiple forums, indicating lack of consensus on the optimal order of AI and DPI rollout [S53].
Overall Assessment

The panel showed strong consensus on the importance of inclusion, safeguards and the catalytic potential of AI for development. However, key disagreements emerged around (i) whether AI should be layered on mature DPI foundations or built as a co‑foundational public infrastructure, (ii) the current maturity of a universal DPI safeguards framework, (iii) the balance between opening government data and maintaining sovereign control, and (iv) whether to prioritize demand creation or supply‑side enablement of AI services.

Overall disagreement was moderate. Most participants shared common goals, but they diverged on implementation pathways and timing. These divergences could affect coordination among governments, multilateral agencies and the private sector, potentially slowing the rollout of AI‑enhanced DPI unless reconciled through joint policy frameworks and shared roadmaps.

Partial Agreements
Both agree that inclusion must be a core priority for DPI and AI projects. Hans emphasizes inclusion through voice‑first, translation and multimodal services [20-25], while Robert stresses making inclusion a key performance indicator and embedding it early in design [63-66]. They differ on the operational focus—technology‑specific features versus KPI‑driven planning.
Speakers: Dr. Hans Wijayasuriya, Robert Opp
Inclusion imperative (Dr. Hans Wijayasuriya) Inclusion‑by‑design KPI (Robert Opp)
Both see AI as an accelerator for public services. Hans describes AI as a scaffolding that can deliver super‑experience once DPI foundations are mature [35-36][44-45]. Saibal describes AI as a shared public infrastructure that can be accessed cheaply to spur innovation [122-124]. They agree on AI’s catalytic role but differ on whether safeguards and DPI maturity must precede AI deployment or can be built concurrently.
Speakers: Dr. Hans Wijayasuriya, Saibal Chakraborty
Safeguards requirement (Dr. Hans Wijayasuriya) Shared AI infrastructure, affordable compute and targeted funding (Saibal Chakraborty)
Takeaways
Key takeaways
Governments must prioritize inclusion, integrity, safeguards, and sovereignty when integrating AI with Digital Public Infrastructure (DPI). Robust DPI foundations—clean data, mature architectures, reliable APIs, and institutional capacity—must be established before AI is layered on top. AI can dramatically improve citizen experience through advanced API orchestration and the ability to handle billions of unconstrained service scenarios. UNDP stresses early, inclusion‑by‑design safeguards and has created a universal DPI safeguards framework to guide countries. The World Bank views DPI as the interoperability backbone for AI and emphasizes creating demand via small‑AI use cases rather than relying on existing data silos. India’s DPI journey (Aadhaar, UPI) demonstrates how open, population‑scale infrastructure fuels private‑sector innovation; AI is being treated as a shared public infrastructure with affordable compute and targeted funding for underserved sectors. Policy must balance controlled government data sharing with innovation needs, establishing accountable institutions at both central and sub‑national levels. Small nations can leverage modular DPI and AI for customized services but face challenges in talent retention, scale, and building sovereign AI infrastructure. UNDP is building internal AI capacity, upskilling staff, and launching a partnership (Xstep) to develop 100 responsible AI use‑case pathways. India adopts a pragmatic, optimistic stance, seeking to blend lessons from US, Chinese, and other models rather than committing to a single approach.
Resolutions and action items
UNDP to roll out the universal DPI safeguards framework in partner countries, embedding inclusion and bias‑detection from the design phase. UNDP announced a partnership with Xstep to develop the “100 Pathways” initiative, a use‑case driven approach to scaling responsible AI. World Bank to promote demand creation for AI through small‑AI use cases and to enhance DPI interoperability across regions. India (via BCG and government initiatives) to expand AI platforms such as AI Coach and state‑level TGDX, providing affordable GPU compute and curated government data access. Recommendation for governments to establish accountable, possibly statutory, institutions (e.g., Section 8 public sector undertakings) to govern AI policy, data sharing, and safeguards. Encouragement for countries to embed inclusion‑by‑design KPIs and safeguard considerations early in DPI and AI projects.
Unresolved issues
Specific mechanisms for controlled government data sharing that protect sovereignty while enabling private‑sector AI innovation remain undefined. Effective strategies to generate sustained demand for AI services in low‑demand regions, particularly sub‑Saharan Africa, are still open. Concrete funding models and incentives to channel venture capital into socially sensitive sectors (climate, education, MSMEs) need further development. Metrics and measurement frameworks for tracking inclusion impact and safeguard effectiveness have not been detailed. Details on how small nations will scale sovereign AI infrastructure beyond modular DPI, especially regarding talent pipelines and long‑term sustainability, were not fully resolved.
Suggested compromises
Adopt a controlled data‑sharing approach that provides innovators access to valuable government data while maintaining sovereignty and privacy safeguards. Combine supplier‑centric and user‑centric models: use DPI as a neutral, interoperable platform that supports user‑centric AI applications without locking into a single vendor. Implement human‑in‑the‑loop, explainability, and bias‑detection safeguards to balance AI scalability with ethical risk mitigation.
Thought Provoking Comments
India is saying, look, there will be a Chinese model, there will be an American model, there will be a whole bunch of other innovations that are going on. What do we do to embrace and use all of this for our benefit?
Frames India’s unique, optimistic stance on AI, contrasting it with the US focus on AGI and job‑privacy anxieties, and sets the tone for a discussion about leveraging multiple global models rather than being locked into a single narrative.
Shifted the conversation from a generic AI debate to an India‑centric opportunity narrative, prompting panelists to discuss how their respective institutions can adopt a pluralistic, benefit‑driven approach.
Speaker: C.V. Madhukar
AI will not redefine DPI. The DPI foundations must be in place first – clean data, mature architectures, reliable APIs, institutional capacity – and then AI is applied as a scaffolding to accelerate delivery.
Introduces a clear hierarchy of priorities, emphasizing that robust digital public infrastructure is a prerequisite for responsible AI, and highlights the risk of building AI on weak foundations.
Guided subsequent speakers (Robert, Saibal, Sangbu) to focus on safeguards, data quality, and institutional readiness, and anchored the discussion around concrete foundational steps rather than speculative AI hype.
Speaker: Dr. Hans Wijayasuriya
The earlier you start discussing safeguards, the better off you’ll be down the road in terms of inclusion. If efficiency is your only metric, you’ll rush ahead and leave people out.
Challenges a common development mindset that prioritizes speed and efficiency over equity, insisting that inclusion must be a primary KPI from the outset.
Prompted a deeper exploration of inclusion by other panelists, leading to concrete examples (voice‑first, multilingual platforms) and reinforcing the need for early‑stage safeguard frameworks.
Speaker: Robert Opp
DPI is exactly the tool to ensure user‑centricity in the AI era; without good tools and interoperability, we cannot fully support user‑customized services.
Links the evolution of computing eras to a shift from supplier‑centric to user‑centric models, positioning DPI as the essential bridge for AI‑enabled personalization.
Steered the conversation toward the practical role of DPI in delivering AI‑driven services, and set up the later discussion on how AI can rapidly upgrade DPI platforms.
Speaker: Sangbu Kim
We are treating AI as a shared public infrastructure, just like DPI was. Over 38,000 GPUs are now available at less than $1 per hour, making compute affordable for startups.
Presents a concrete policy and ecosystem model—publicly provisioned compute and data—to democratize AI innovation, mirroring the successful DPI model.
Introduced the idea of AI as a public good, influencing later remarks on funding mechanisms (fund‑of‑funds) and prompting discussion on how governments can replicate DPI‑style infrastructure for AI.
Speaker: Saibal Chakraborty
Being a small country can be a strength: we can implement modular systems with laser‑sharp focus, using AI on top of a solid DPI to deliver citizen‑specific experiences via digital twins.
Turns the perceived disadvantage of size into an advantage, highlighting agility and modularity as strategic assets for AI adoption in smaller economies.
Provided a nuanced perspective that balanced earlier concerns about resource constraints, encouraging other speakers to consider tailored, modular AI solutions rather than one‑size‑fits‑all approaches.
Speaker: Dr. Hans Wijayasuriya
We announced a partnership to identify 100 ‘Diffusion Pathways’ – use‑case driven approaches to scaling responsible AI across sectors.
Moves the conversation from abstract safeguards to a concrete, actionable roadmap for responsible AI deployment, emphasizing use‑case diversity and scalability.
Shifted the dialogue toward implementation strategies, inspiring other panelists to think about measurable pathways and concrete pilots rather than only high‑level principles.
Speaker: Robert Opp
Policy must walk the tightrope: expose valuable government data in a controlled manner to innovators while protecting sovereignty and safety; create accountable institutions at state level to drive AI.
Highlights the delicate balance between data openness and national security, and proposes institutional solutions (e.g., Section 8 PSU) to operationalize this balance.
Deepened the policy discussion, prompting acknowledgment of federal vs. state roles and reinforcing the earlier theme that institutional design is critical for AI‑enabled DPI.
Speaker: Saibal Chakraborty
Overall Assessment

The discussion was shaped by a series of pivotal remarks that moved the conversation from broad optimism about AI to concrete, actionable frameworks. Early framing by the moderator positioned India’s pluralistic approach, which was then grounded by Dr. Hans’s insistence on strong DPI foundations. Robert’s warning about premature efficiency‑driven roll‑outs reinforced the centrality of inclusion and safeguards. Sangbu and Saibal linked DPI’s user‑centric evolution to AI’s potential as shared public infrastructure, introducing practical mechanisms like affordable compute and modular systems. Subsequent comments on small‑country agility and the 100‑pathway initiative provided tangible strategies for implementation. Finally, Saibal’s policy tightrope underscored the need for balanced data governance. Collectively, these insights redirected the dialogue toward a nuanced, layered roadmap—foundation, inclusion, institutional design, and scalable use‑cases—ensuring the conversation remained focused on realistic, inclusive, and responsible AI deployment in the DPI ecosystem.

Follow-up Questions
How can governments open up data sets to train AI engines while preserving sovereignty and ensuring appropriate safeguards?
Addressing data silos is crucial for AI development; finding a balance between data accessibility for innovators and national security is essential for scaling AI across countries.
Speaker: C.V. Madhukar (asked), Sangbu Kim (discussed)
What specific demand‑creation strategies can be employed to increase AI adoption in regions with good connectivity but low utilization, such as sub‑Saharan Africa?
Understanding how to translate network coverage into meaningful AI‑driven services is vital for realizing the benefits of AI in underserved markets.
Speaker: C.V. Madhukar (asked), Sangbu Kim (responded)
What are the concrete metrics and methodologies to assess inclusion, bias detection, and opacity in AI‑enabled DPI systems at scale?
Ensuring that AI does not amplify existing inequities requires robust measurement frameworks; without them, safeguards may be ineffective.
Speaker: Dr. Hans Wijayasuriya (raised concerns), Robert Opp (reinforced)
How will the UNDP’s ‘100 Pathways’ initiative identify, prioritize, and scale responsible AI use cases across different sectors?
Clarifying the selection and scaling process will help coordinate global efforts and provide a roadmap for countries to adopt responsible AI solutions.
Speaker: Robert Opp (introduced)
What is the impact and effectiveness of the AI Coach (India AI Mission) and TGDX platforms in providing affordable compute and access to government data for startups?
Evaluating these platforms will inform whether shared public AI infrastructure can truly accelerate innovation, especially for early‑stage ventures.
Speaker: Saibal Chakraborty (mentioned)
How can fund‑of‑funds mechanisms be structured to channel venture capital into socially sensitive sectors (e.g., climate, education, MSMEs) that currently receive limited investment?
Targeted financing is needed to diversify AI‑driven innovation beyond fintech and e‑commerce, addressing broader development goals.
Speaker: Saibal Chakraborty (identified gap)
What policy frameworks are needed to enable controlled sharing of sovereign government data with private innovators while protecting privacy and security?
Clear policies will facilitate data‑driven innovation without compromising national interests, a key barrier identified across multiple speakers.
Speaker: Saibal Chakraborty (policy suggestion)
How can small nations like Sri Lanka implement modular AI/DPI systems to deliver customized, citizen‑specific services efficiently?
Understanding modular approaches can guide other small or resource‑constrained countries in leveraging AI for public service delivery.
Speaker: Dr. Hans Wijayasuriya (discussed)
What lessons can be learned from the implementation of the universal DPI safeguards framework across countries, and how can its effectiveness be measured?
Assessing the framework’s impact will help refine safeguards and ensure that DPI deployments are inclusive and secure worldwide.
Speaker: Robert Opp (mentioned)

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

Nepal Engagement Session

Session at a glanceSummary, keypoints, and speakers overview

Summary

The panel discussed using AI language tools to make the eGram Swaraj portal, serving 250,000 gram panchayats, accessible to non-English speakers [1-3]. Alok Prem Nagar recalled a 2019 Karnataka Gram Sabha where officials could not understand the English portal, highlighting a transparency barrier [4-8]. Bhashini was introduced to translate expense pages into local languages with a single click, which he called “magic” [13-18].


A survey showed secretaries spent most of their time preparing minutes, leading to the Sabha Saar tool that auto-creates draft minutes from audio/video via Bhashini [21-23]. Odisha, Tamil Nadu and Tripura have adopted Sabha Saar, and six more languages such as Assamese and Maithili are being added [66-71]. Training programmes now let any villager view financial plans, execution status and geotagged assets on the portal [41-43].


AI upgraded the Swamitva drone survey by turning rooftop data into solar-potential estimates for 2.38 lakh panchayats, linked to the PM Surigarh Yojana portal [30-33]. Uttar Pradesh onboarded 59,000 panchayats in 40 days, showing that simple mobile tools can bypass infrastructure hurdles [107-110][133-135]. Open architecture, API-based design and Indian data residency were highlighted to prevent vendor lock-in [165-186].


Next steps involve Bhashini integration with spatial plans, the Pancham chatbot and image-based issue routing [259-275]. Many meetings remain untranslated due to unsupported dialects, prompting states to train custom language models [68-70]. The panel concluded that multilingual AI on a public digital stack can greatly improve transparency, accountability and participatory governance in rural India [225-230].


Keypoints


Major discussion points


Language AI (Bhashini) makes digital governance understandable for rural users.


The portal eGram Swaraj was originally English-only, limiting villagers’ ability to see plans, expenses and minutes [3-4][7-10]. Bhashini enabled on-demand translation of expense pages and meeting minutes into local languages, turning “magic” for panchayat members [13-18][21-23][37-40][66-71].


Sabha Saar – an AI-driven voice-to-text summarisation tool – cuts the secretarial burden and improves transparency.


By uploading an audio/video recording, Panchayat secretaries receive a draft minute that can be edited and published, addressing the “pain point” that 65 % of secretaries spent most of their time on meeting documentation [22-23][50-64][66-71].


AI is being layered onto existing schemes to create new services (e.g., Swamitva, solar-potential mapping, meteorological forecasts, spatial development plans).


Drone-captured data from Swamitva were repurposed to estimate rooftop solar potential, linked to the PM Surigarh Yojana portal [24-33]; weather forecasts are now pushed to each Gram Panchayat via Bhashini [138-148]; spatial development plans and visualisations are being generated for highway-adjacent villages [259-267].


Scaling and implementation challenges are being tackled through rapid onboarding, capacity-building, and expanding language coverage.


Uttar Pradesh onboarded 59,000 Gram Panchayats to eGram Swaraj in 40 days, demonstrating that a well-designed product can overcome registration, digital-signature and payment-process hurdles [106-115]; additional languages (Assamese, Boro, Maithili, etc.) are being added to Bhashini to reach speakers previously excluded [66-71].


Open architecture, interoperability and data sovereignty are seen as essential for long-term sustainability and future AI expansion.


The speakers stress the need for API-based, modular systems that can integrate new AI use-cases (agentic, generative, computer-vision) while keeping data residency within India and avoiding vendor lock-in [152-186][180-186].


Overall purpose / goal of the discussion


The conversation was a fireside-chat aimed at showcasing how language-enabled AI, particularly the Bhashini platform and the Sabha Saar tool, is transforming Panchayati Raj institutions. Participants highlighted concrete benefits (greater transparency, citizen participation, efficient financial tracking), shared lessons from large-scale roll-outs, identified operational hurdles, and outlined a roadmap for deeper integration of AI across rural governance services.


Overall tone and its evolution


– The dialogue opened with enthusiastic and demonstrative remarks about the breakthrough that Bhashini provided [13-18].


– It shifted to a practical, solution-focused tone when describing the mechanics of Sabha Saar and its impact on secretarial workload [22-23][50-64].


– As the discussion progressed, a reflective and candid tone emerged around challenges of onboarding, language gaps, and capacity-building [106-115][66-71].


– The latter part adopted a forward-looking, optimistic stance, emphasizing open-architecture principles, future AI use-cases, and the vision of AI as a public-stack enabler of participatory governance [152-186][259-267].


Overall, the conversation moved from celebration of early wins, through honest appraisal of obstacles, to a confident outlook on scaling AI-driven governance at the national level.


Speakers


Shri Alok Prem Nagar – Senior official, Ministry of Panchayati Raj (MOPR), Government of India; expertise in rural governance, digital transformation, AI-enabled public services [S6].


Shri Amit Kumar – Senior official, Ministry of Panchayati Raj (MOPR) (AI implementation lead); expertise in AI applications for governance, policy design, and digital inclusion [S1].


Moderator – Session moderator; expertise in facilitation and discussion management.


Additional speakers:


Ms. Deepika – No role or title mentioned in the transcript.


Full session reportComprehensive analysis and detailed insights

1. Context & problem – The eGram Swaraj portal, the single-window system used by all 2.5 lakh (250,000) gram panchayats for planning, execution and payment, was built only in English, limiting its usefulness for rural officials and citizens alike [1-3]. Shri Alok recounted a 2019 Karnataka Gram Sabha where, despite being honoured on stage, he could not follow the proceedings because they were presented in English [4-8]. This episode highlighted the fundamental language barrier that prevented people from engaging with public money.


2. Bhashini introduction – The government’s AI-powered translation engine, Bhashini, was described as a “revelation”. With a single click a panchayat member can view the expenses page in his own language, an effect Alok called “magic” [13-18]. Bhashini also enabled Alok to draft letters to state governments in their local languages, including a Telugu letter [70-73].


3. Sabha Saar development – A rapid-assessment survey of roughly 8,000 panchayat secretaries showed that 65 % of their workload was spent on minute-taking, prompting the creation of a voice-to-text service that generates a draft minute from an audio or video recording, which can then be edited and uploaded [21-23][60-64]. The tool was launched on 14 August 2025 [31-33]; by 4 Feb 2021 more than 1,15,115 Gram Sabha meetings had been recorded [41-44]. It has been adopted in Odisha, Tamil Nadu and Tripura, and the language catalogue is being expanded to include Assamese, Boro, Maithili, Santali and eleven other regional languages [66-71]. The design keeps the draft editable before publication, a “human-in-the-loop” safeguard emphasized by both speakers [60-64][165-168]. Alok noted that Sabha Saar is not bundled with the recording device, allowing it to bypass village connectivity problems [61-64].
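The minute-drafting flow described above — local-language speech recognition, translation to English, AI summarisation into a draft minute, back-translation, and then human editing before upload — can be sketched as a simple pipeline. This is an illustrative sketch only: the function names below are hypothetical placeholders and do not reflect the actual Bhashini or Sabha Saar APIs.

```python
# Hypothetical sketch of the Sabha Saar minute-drafting pipeline.
# None of these functions are the real Bhashini API; they are stubs
# that illustrate the data flow described in the session.

def transcribe(audio: str, lang: str) -> str:
    """Stand-in for ASR: returns the speech as text in the source language."""
    return f"[{lang} transcript of {audio}]"

def translate(text: str, src: str, tgt: str) -> str:
    """Stand-in for machine translation between two languages."""
    return f"[{tgt} translation of {text}]" if src != tgt else text

def summarise(text: str) -> str:
    """Stand-in for the AI engine that condenses a transcript into draft minutes."""
    return f"[draft minutes from {text}]"

def draft_minutes(audio: str, local_lang: str) -> str:
    """Record -> transcribe -> translate to English -> summarise -> translate back.

    The result is only a *draft*: publication happens after a human
    (the panchayat secretary) reviews and edits it, preserving the
    human-in-the-loop safeguard the speakers emphasized.
    """
    transcript = transcribe(audio, local_lang)
    english = translate(transcript, local_lang, "en")
    minutes_en = summarise(english)
    return translate(minutes_en, "en", local_lang)

# Example: a meeting recorded on a mobile phone in an Odia-speaking panchayat.
draft = draft_minutes("gram_sabha.mp4", "or")
# The secretary edits `draft` before uploading it to the portal.
```

Because the tool runs separately from the recording device, the audio can be captured offline on any phone and uploaded later, which is how the connectivity constraint described above is sidestepped.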


4. Additional AI-enabled services


Swamitva solar-potential: AI repurposed dense drone-generated point-cloud data to estimate rooftop solar potential for 2.38 lakh gram panchayats; the results are displayed on the Gram Manchitra map and linked to the PM Surigarh Yojana portal, enabling local renewable-energy campaigns [24-33].


Meteorological forecasts: Daily weather forecasts are pushed to every gram panchayat via Bhashini, giving villagers access on their phones [146-148].


Spatial-development plans: Architecture-college pilots created visualised development plans for highway-adjacent villages; the approach was later adopted statewide in Andhra Pradesh [259-267].


5. Scalability & implementation challenges


Uttar Pradesh onboarding: 59,000 gram panchayats were migrated to eGram Swaraj within 40 days, a feat described as “impossible” until a user-centred product was delivered [107-115].


Simplicity of tools: Recordings can be made on any mobile phone, a device already owned by most villagers, which was highlighted as a key factor in rapid adoption [61-64][82-84].


Capacity-building: A programme that began the previous year, described as an “incredible” journey, equips villagers to drill into financial dashboards, view execution status and see geotagged assets on the portal [41-43][40-41].


Remaining gaps: Many dialects remain unsupported, connectivity in remote areas is uneven, and continuous training is required to overcome initial resistance [66-71][85-89][106-115].


6. Open architecture & interoperability – Both speakers stressed the importance of an open, API-based design. Shri Amit highlighted modular, interoperable standards that keep data residency within India while allowing models and infrastructure to be swapped if geopolitical risks arise [165-168][180-186]. Shri Alok added that, while he is not in a position to extend the eGram Swaraj model wholesale to other ministries, he welcomes integration with existing robust systems such as the “Meri Panchayat” mobile app and Common Service Centres (e.g., “Bapuji Seva Kendra”) [138-144].


7. Future integrations & vision – The panel envisaged AI-driven image analysis that automatically classifies citizen-reported photos of potholes or overflowing drains and routes them to the responsible department, a capability already piloted in Guwahati [208-211]. The “Pancham” WhatsApp-based chatbot, which enables two-way communication with sarpanches and secretaries, is being expanded to deliver AI-generated audio-video messages and rapid updates [273-275]. The Department of Drinking Water and Sanitation has expressed interest in applying Bhashini to Village Water Committee meetings, signalling cross-sectoral diffusion of the technology [98-100]. Spatial-development visualisations and solar-potential dashboards are slated for deeper integration, reinforcing the vision of AI as a multi-modal service layer for rural governance.


8. Impact on transparency & participation – The portal now lets any user drill into a gram panchayat’s record and see, for each financial year, the plan, execution amount, related bills, payment status and geotagged assets [40-41]. Structured documentation through Sabha Saar has been reported to change the culture of panchayat functioning, fostering greater openness, better monitoring of project implementation and a shift toward accountability [93-95][21-23]. Both speakers reiterated that AI-generated outputs must remain editable and subject to human review to preserve credibility [60-64][165-168].


9. Conclusion – The discussion demonstrated strong consensus that multilingual AI, delivered through low-cost mobile-first tools, can bridge the language divide, streamline administrative workflows and enhance citizen participation in Gram Sabhas. Success hinges on continued capacity-building, robust human-in-the-loop safeguards and open, modular architectures. While the degree of automation for meeting summarisation and the extent to which the Panchayati-Raj model should be replicated across other ministries remain points of divergence, both speakers expressed optimism that AI built on a public digital stack, anchored in language inclusion and sovereign infrastructure, is poised to become a powerful enabler of participatory governance in 21st-century India, provided the identified challenges of language coverage, connectivity, training and governance frameworks are addressed in the next phase of implementation.


Session transcriptComplete transcript of the session
Shri Alok Prem Nagar

All panchayats, all two and a half lakh of them, they are present on eGram Swaraj. For right from planning to the payment stage, everything is done on a portal which is called eGram Swaraj. This portal works in the English language. So I’ll tell you, in 2019 when we were starting something called the People’s Plan Campaign, I happened to attend a Gram Sabha in the state of Karnataka. I was there for something like 45 minutes and I was felicitated and sat on stage. And I didn’t understand a thing. And then it struck me, you know, I had this thing that how do you expect these people really to relate to what is happening? Because it is public money.

Everybody in the panchayat needs money. It needs to know what kind of plans are uploaded, how many works got done that were as per the plans, how much did it cost. It costs them to do it. And subsequently, they can raise issues in the meetings pertaining to the works close to their residences. And along came Bhashini. I think we had in the year 2023 an event called Manthan, where we invited a lot of people from the industry to tell us how we could conduct our business better. And so Bhashini was a revelation. And imagine that a person from a panchayat is looking at the expenses page for his gram panchayat or her gram panchayat. And then by a click of a button, they’re able to see it in their own language.

It was magic. And that was the starting point. Yeah. And subsequently, of course, we went from there, and we found out through a survey that what really hurts a panchayat secretary is not to be able to produce the minutes of meeting in time, which are very important, which are the only record of a panchayat’s proceedings. And then, again, using Bhashini and another tool, we were able to create Sabha Saar, in which if you input the video slash audio recording of your meeting, you are able to get a minuted draft, which you can then edit and upload. So that was miracle number two. And briefly, if I could also address Swamitva, the scheme that you mentioned.

Swamitva is a scheme where drone surveys are carried out over all the village habitations. So there are these pictures that are subsequently converted to ortho-rectified images, and they lead to property rights for the people living inside those villages. But the way the images have been captured, there is dense point-cloud information, all of which was getting wasted. Why? Because we were confining our attention only to the ortho-rectified images. So we had the AI guys look at that, and then they converted all those rooftops that they could see into the solarization potential. As a result of which now, out of the 3.3 lakh gram panchayats where drone surveys have been carried out, in 2.38 lakh gram panchayats you can go to Gram Manchitra, and you can zoom into your village, and then you can click the icon corresponding to the solarization potential, and it will tell you, roof-wise, how many panels you can fit there.

We’ve gone further, and we’ve integrated that with the PM Surigarh Yojana portal, as a result of which the Gram Panchayat can drive it like a campaign and lead to greater rewards for everybody all around.

Moderator

Actually, it reaches the last mile citizen when you talk about those benefits. So India’s last mile operates in local languages and dialects, as you mentioned, solving that problem. So in your view, how critical is language AI in ensuring that digital governance platforms are inclusive and participatory and increases citizen trust and participation in Gram Sabhas?

Shri Alok Prem Nagar

Like I said, people are now able to follow what was something that was written in. They could still see it, of course, in the English language; then they’d have to go to the person who they knew to be very smart in the village and have this person read it out to them. Now they can see it at their leisure. Not just people here, but people outside who are working in Mumbai can see what is happening in their panchayats, close to Pune or something, and immediately they can get active about it. And the minuting tool that I mentioned, that opens a whole new set of avenues. Now you can have a record; then, against that, you can have action taken reports; and then you could have follow-up in the next meeting. It makes it all amenable to very systematic representation on portals. That is what some of the states have already started doing. And it is truly remarkable that anybody can go in there — and when I say anybody, I don’t mean just the panchayat secretaries: anybody in a village can drill into their gram panchayat’s record and see, corresponding to the finance commission grants for any year, what was the plan, against which how much has been executed, how many bills were prepared against each activity, and what is the status of the payment — whether it has been completed, where the asset exists, the geotags — and then you can zoom in and maybe see it on Gram Manchitra.

So there are great rewards for everybody all around and we need to of course now intensify it through a capacity building training program. That is something we started doing from the previous year, but it has been an incredible journey. And it is being adopted all over, yeah.

Moderator

So Alokji, let’s talk a bit about Sabha Saar impact. Let’s let our audience know about it. With its launch on 14th August 2025, MOPR introduced an AI-enabled voice-to-text meeting summarization tool powered by Bhashini ASR Services. So as of 4th February 2021, over 1,15,115 Gram Sabha meetings have been held and processed. So this is a good number; I need a round of applause. So what structural changes have you observed in the panchayat functioning after Sabha Saar?

Shri Alok Prem Nagar

Sabha Saar was one thing we carried out for the convenience of the panchayats and the panchayat secretaries, as opposed to eGram Swaraj, which served our own selfish motive: we wanted panchayats to plan there and show all their vouchers there, so that we could tell how the money has been spent. But Sabha Saar actually came out of a survey carried out using RapidPro by UNICEF. We asked something like 8,000 panchayat secretaries all over the country how they spend their time: how much of it goes into inspections, attending programmes, meetings, and making records. One thing that came through, for 65 per cent of the respondents, was the conduct and recording of meetings.

That was the activity sitting very heavy on their entire time availability. Having realized this, and with the help of Bhashini, we converted it into a tool. It is very simple; there is no big standard operating procedure, as it were. If you are holding a meeting, there has to be a recording device, which could well be your mobile phone. Through audio or video recording, you capture each time somebody speaks, and later you input this into the Sabha Saar tool. The Sabha Saar tool is not part of the device on which you carried out your recording, so we have been able to sidestep the connectivity issues in villages. Once you do that, it gives you a draft minute of the meeting: Bhashini turns the speech into English, the English text is summarized using the AI engine, and Bhashini gives it back to them in their own language. Voila, the person can just make a few changes and upload it. We have had some heartfelt gratitude coming to us from villages as a result of this.
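
The workflow described here, record offline, upload later, transcribe, translate to English, summarize, translate back, then human review, can be sketched roughly as follows. This is an illustrative sketch only: the function names, the `DraftMinutes` type, and the stubbed steps are hypothetical placeholders, not the actual Sabha Saar or Bhashini API.

```python
# Rough sketch of the Sabha Saar-style pipeline described above.
# All function bodies are illustrative stubs; the real system calls
# Bhashini's ASR, translation, and summarization services.

from dataclasses import dataclass

@dataclass
class DraftMinutes:
    language: str
    text: str
    reviewed: bool = False  # secretary edits before upload

def transcribe(audio: bytes, language: str) -> str:
    """ASR step: speech in the local language to text (stub)."""
    return f"<transcript in {language}>"

def translate(text: str, source: str, target: str) -> str:
    """Machine-translation step (stub)."""
    return f"<{target} rendering of: {text}>"

def summarize(text: str) -> str:
    """Condense the full transcript into draft minutes (stub)."""
    return f"<summary of: {text}>"

def make_draft_minutes(audio: bytes, language: str) -> DraftMinutes:
    # Recording happens offline; this processing runs once the file
    # is uploaded, sidestepping village connectivity constraints.
    transcript = transcribe(audio, language)
    english = translate(transcript, source=language, target="en")
    summary_en = summarize(english)
    summary_local = translate(summary_en, source="en", target=language)
    # Human-in-the-loop: returned as an unreviewed draft.
    return DraftMinutes(language=language, text=summary_local)

draft = make_draft_minutes(b"...", "hi")
print(draft.language)  # hi
```

The key design point the speaker highlights is that only the upload needs connectivity; every compute-heavy step happens off the recording device.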

Moderator

OK, so has the structured documentation improved transparency, participation tracking, or the monitoring of meeting frequency and agenda quality too?

Shri Alok Prem Nagar

Now that the minute is ready, whether there are 5 items or 10 items, the states that have really gone ahead and adopted it, which are Odisha, Tamil Nadu and Tripura, are all in the second stage now, where they are looking at the minutes of meetings and refining them into tools that help them keep track of the activities after they have been created. We also asked ourselves in our meetings why the number is just 1,15,000. There are a whole lot of people whose languages do not exist on Bhashini, so we asked those states to provide Bhashini with the necessary expertise so that they can train their bots.

And they are already working on something like 11 more languages, including Assamese, Bodo, Maithili, Santali and so on. So those languages are also coming. It has been a very gratifying experience, and the learning continues.

Moderator

Yeah, it's commendable that things have reached that level. So over to you, Amitji: from an accountability lens, does structured documentation change behaviour within governance systems?

Shri Amit Kumar

Thank you. So I think, you know, if you have understood the enormity of the situation, what we are talking about is 2,50,000-plus gram panchayats and different kinds of languages. Just to circle back, look at the frugality of the situation. In India, people generally talk about either living in the bullock-cart stage or aspiring for the bullet train. The point is, if AI is going to shape how we learn in the future and how we will transform, we cannot leave out the 900-plus million people who are living in villages. Absolutely. So the idea is not to make it a very urbanized, very elitist notion.

AI is not only for urban areas, not only for industries, not only for the commercial sector. Obviously, this is a journey; you have to start somewhere. For example, the frugality I was talking about: we did not ask gram panchayats to invest anything. All they need is a mobile phone, which they have anyway, and the idea is just to record and upload. Obviously there will be some challenges and some resistance in the beginning. But once they get used to it, today we are asking them just to upload the recording, and the rest is done by the system.

And the system also has a provision for a human in the loop, so that we can go and correct it. Tomorrow, the next step we can perhaps take is that when the next meeting happens, we also populate the agenda from the last meeting: what was discussed last time, what was committed, whether you are doing it or not. And then everything goes into the public domain. People who live in cities know that when there is an RWA meeting, nobody goes to attend, but they all fight in the WhatsApp group. In the village also it is not easy to bring people together, but once they start getting the hang of it, that there is a meeting, I am getting the MoM, and it is available in the public domain, things change. We are using AI for good; AI can also be leveraged for the rural sector. Why does it have to be very elitist, only for Passport Seva? So that is just a beginning; it is a journey. And from an idea point of view, it is a phenomenal idea for the Ministry of Panchayati Raj; let me congratulate sir and the entire team for thinking of something like that, because AI is all about ideas and use cases. If you have the right idea, you can do wonders, but you have to have the idea and the muscle to execute it. That way, I believe this whole documentation will do wonders for them.

Gram panchayats will also gain something that was missing in most parts of the world: record keeping, accountability, transparency and so forth, because generally these decisions were taken by a few people and executed by a few, and the larger population was kept out of it, knowingly or unknowingly. So that is what I said: it will change the way they work and the way they think, because we are starting only with, let us say, meetings, but now they will start thinking, and there will be demand from the states for what more can be done with AI.

So broader scale will be achieved. Sabha Saar is an example; like Praman, we have also launched the Pancham bot for all elected and selected representatives. It has been a great experience, and efficiency will obviously help them adopt it. Let me tell you, in our own corporate meetings some of us are still making notes, despite being on Teams, despite using Copilot, despite having all the tools at our disposal; we still expect a junior person to take notes and circle back. So there is a cultural change you also have to see. And these changes could not have been possible if we did not have infrastructure like Bhashini. How did the ministry benefit? We have infrastructure like Bhashini, we have GPUs made available to us through the IndiaAI Mission, otherwise procurement itself could have been a big challenge, and we have a team to build applications. It takes a village to move something, and that is what has happened here.

Shri Alok Prem Nagar

Thank you for sharing your thoughts. Just continuing with that: the Department of Drinking Water and Sanitation has actually approached us about the meetings of their VWCs, the village water committees. They want to use Bhashini for that, and there has been some initial interaction between the two teams.

Moderator

That's commendable, I would say. That's awesome. So, Alokji, let's talk about some implementation challenges with AI in rural India. AI in rural governance is transformative but complex. What are the biggest operational challenges: infrastructure, training, dialect diversity, connectivity? I think Amitji touched on infrastructure already. And how receptive are panchayat functionaries and rural citizens to AI-enabled systems?

Shri Alok Prem Nagar

Challenges, of course, there are many, as anybody will tell you. But look at what we have found with the adoption of eGram Swaraj by our gram panchayats. A case in point: Uttar Pradesh has something like 59,000 gram panchayats, and for Uttar Pradesh to onboard eGram Swaraj seemed like an impossible task, because it involved registering digital signing certificates and everybody agreeing to completely dispense with chequebooks; all payments were then going to be online. Can you imagine, Uttar Pradesh did it in 40 days flat, all 59,000 gram panchayats. My point is that if you are ready with a product that addresses their needs and is friendly, you meet halfway. My need was that I needed the money well accounted for, and then there was their need.

It was a system that made it very easy for them to do it. So we met halfway, and if UP can do it with 59,000, I am not prepared to hear an excuse from any other state in the country. It is a trial by fire. Likewise for Sabha Saar: as I said initially, there was a demand indicated by the states, so when we set out to meet it, we were clear about what we were looking for, and people were very forthcoming. In fact, Bhashini also enabled me to write letters to the states in their own languages, and people were gushing with affection and whatnot. I got a letter in Telugu for the first time, and all that.

So there are challenges, but the gram panchayats are predisposed to meet you halfway. You need to begin that journey, and we have seen that with regard to a number of things. There have been campaigns: every year they carry out a campaign from 2nd October to the 31st of December, which typically extends into January, in which all two and a half lakh gram panchayats prepare their gram panchayat development plans and upload them on the portal. So 2.5 lakh, 250,000 gram panchayats, all of them planning for the next year, and before you enter the next financial year, their plans are ready. We do not even do that in the departments, in the ministries. And these gram panchayats have not done it just once or twice.

They started in 2018 and have continued ever since. In the COVID year, there was a request that we not run the campaign, and there was a massive pushback from the states: no, we want to do it. The momentum was so great that they still did it. So there are challenges, but if you make an application like he was saying, where the recording device is simply a mobile phone and there is nothing you need to procure to set it up; if you make a simple tool, people will grab it with both hands. So I think, with the response we are getting with Bhashini, it is rather an embracing of the challenges.

Moderator

So for ministries delivering last-mile services, such as the Ministry of Rural Development and the Ministry of Agriculture and Farmers Welfare, what lessons from MOPR's AI journey would you share? And how important are open architecture and interoperability, in your view?

Shri Alok Prem Nagar

That is dangerous territory. I am not in a position to start advising anybody, because they have pretty robust systems of their own. Look at NREGASoft and the PM Awas Yojana: they are running schemes that are very pointed. Awas Yojana is just about houses. MGNREGA is of course a very big scheme, as large as the things we do under the Finance Commission, but it is fairly well organized, and in all of these the beneficiary is typically the individual. In Panchayati Raj mode there are individuals at the end of it, but our emphasis is on the institution, the panchayat.

And it is not just eGram Swaraj and the things we do for their accounting and planning. We have also hooked up with the meteorological department, and daily forecasts are now generated for every gram panchayat, which people are able to see on their phones, with the same ease as they see everything else using Bhashini. So it is a great enablement all around, and it can only get better.

Moderator

Absolutely. So, Amitji, over to you: how critical is open architecture in ensuring long-term sustainability and avoiding vendor lock-in?

Shri Amit Kumar

If I can take a minute and talk about the previous question? (Moderator: Please go ahead.) Sir rightly mentioned that different ministries have different mandates; it is not an apples-to-apples comparison. But you also have to see that the main role of Panchayati Raj, as I understand it, is mobilization, because they are not running major schemes of their own compared to others. And best practices do not have to be in the form of technology or architecture only. If you go down from the top, there are different ministries, but when you reach the village, you will see the same infrastructure and the same set of people working for all the departments. So the idea is: if one can do it, others can also do it. There is a lot of learning in terms of method: how we overcame obstacles, how we mobilized, how we implemented some of these solutions.

And I am sure we know that RD and Agriculture are also doing a lot of things, and their mandate is much bigger. But they can also take a lot of pride, or learning, from the success we have had. What was the second question? The second one was how critical open architecture is in ensuring long-term sustainability and avoiding vendor lock-in. So, you must be hearing the word "sovereignty" quite a lot nowadays. The whole idea of being sovereign in any part of technology, be it defence, be it IT, be it anything, is survivability: despite any kind of geopolitical risk, we should survive.

Yeah. Our systems should keep running. For that, people generally confuse sovereignty with making everything local to India. That is not the case; we will always have some technology from outside. But we have to design in a way that we are ready to shift: from a technology point of view, we have interoperability, the standards we have chosen, the models we have chosen, infrastructure we can move around, and teams that are in control. Data residency has to be within India, and the data is with us, so if we have trained on one model, we can train on something else too. The idea is also to look a little long term.

See, what happened is that when we started, there were obviously a lot of POCs. Nobody knew how AI would behave. We still don't. So obviously you have to start somewhere, and you also have to ensure that what begins with one use case stays manageable: when the department itself becomes fully AI-enabled and we have 10 AI use cases running, there is a problem of management. That is where we need to plan better for the future, rather than defining a use case and then just picking the easiest method of procuring infra or the model we happened to know.

So going forward, I think there will be a platform approach, where we think for the future as well: these AI use cases are likely to come, different kinds of AI, be it agentic, be it gen-AI, be it conversational, be it computer-vision analytics. Accordingly, we have to have an open architecture, the way we did in normal digital transformation. Even in digital transformation, there was a time when we created our own independent monolithic applications, but now we create applications that are API-based, can integrate with anything, are futuristic, and can scale modularly. The same concepts have to be used for AI initiatives as well.
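
The "ready to shift" design described here, applications coded against open interfaces so that a model or vendor can be swapped without a rewrite, can be sketched as a thin abstraction layer. The class names, the provider registry, and the stubbed calls below are all illustrative assumptions, not any actual government stack or the real Bhashini SDK.

```python
# Sketch of a provider-agnostic interface: applications depend on the
# abstract class, so swapping vendors is a config change, not a rewrite.

from abc import ABC, abstractmethod

class TranslationProvider(ABC):
    """Common interface every vendor adapter must implement."""
    @abstractmethod
    def translate(self, text: str, source: str, target: str) -> str: ...

class BhashiniProvider(TranslationProvider):
    def translate(self, text: str, source: str, target: str) -> str:
        # Would call the Bhashini service here (stubbed).
        return f"[bhashini {source}->{target}] {text}"

class OtherProvider(TranslationProvider):
    def translate(self, text: str, source: str, target: str) -> str:
        # A hypothetical alternative backend (stubbed).
        return f"[other {source}->{target}] {text}"

def build_provider(name: str) -> TranslationProvider:
    # The registry is the single place where vendor choice lives.
    registry = {"bhashini": BhashiniProvider, "other": OtherProvider}
    return registry[name]()

provider = build_provider("bhashini")
print(provider.translate("namaste", "hi", "en"))  # [bhashini hi->en] namaste
```

The design choice matches the speaker's point: interoperability and exit-readiness come from the interface boundary, not from banning outside technology.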

Moderator

Well said. I think adoption comes with responsibility, and that is what you are scaling, looking at the future. So, Alokji, Sabha Saar demonstrates how language AI can power grassroots governance. After Sabha Saar's success, what deeper integrations do you envision with Bhashini, and what does the next phase of collaboration look like? Let's talk about that.

Shri Amit Kumar

And people are going to be speaking in any number of languages. I think the next step: the government has always been very invested in providing services, in making ease of living easier, as it were, and providing all manner of things. Everything is finally a service. You need to see a doctor, you need your road fixed, you need a street light to be working, you want the waterlogged area to be drained. Yeah. Okay. Over to you.

Shri Alok Prem Nagar

So people should come to expect, should demand, these services from their gram panchayats. There are mechanisms for doing that, because gram panchayats do not have a lot of resources in terms of manpower, people at their beck and call to carry out the activities flowing from their charter. So there are systems in a lot of these villages: you have common service centres, and some states have their own systems of common service centres, like UP, like Bapuji Seva Kendra in Karnataka, like MeeSeva. We need to take that further: people should be able to talk and find out whether a certain service available to them can be availed in their village.

If they are to do that, what is the mechanism? And if they have already made an application, the system should be able to tell them where that application currently stands. That is a very wide area. As I said, there are a number of services. We also learnt of a pilot carried out in Guwahati where a bus had a camera; it would drive through, capture any number of images, and on that basis assign issue labels: if there is a drain overflowing, it takes note of that; if there is a pothole, it takes note of that; and then it assigns it to the agencies whose job it now becomes to fix it. We have a mobile interface called Meri Panchayat, which ports a lot of information from eGram Swaraj. Meri Panchayat also has the capability of capturing images of the issue being reported. I think the next step is that it makes sense of the image and assigns it to the necessary department.

There are people mapped whose job it is to carry it out, and if it does not happen within a certain amount of time, there is escalation. We need to go deeper into that system; that, I think, is the next frontier. And because it involves vocalization of your demands, Bhashini is absolutely critical to this. So when we say there is a long way to go, I think that phrase is no longer relevant: it is a short way, not even a big journey, an intelligent journey to move ahead.
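
The envisioned flow, classify a citizen's photo, route it to the mapped department, escalate if unresolved within a time limit, can be sketched as below. Every name here (the label-to-department mapping, the SLA values, the functions) is a hypothetical illustration, not the Meri Panchayat implementation or the Guwahati pilot.

```python
# Illustrative sketch of image-based grievance routing with escalation.
# The classifier is stubbed; a real system would run a vision model.

from datetime import datetime, timedelta

ISSUE_TO_DEPARTMENT = {
    "pothole": "public_works",
    "overflowing_drain": "sanitation",
    "broken_streetlight": "electricity",
}
SLA_DAYS = {"public_works": 14, "sanitation": 3, "electricity": 7}

def classify_image(image: bytes) -> str:
    """Computer-vision step (stubbed): photo -> issue label."""
    return "pothole"

def route_issue(image: bytes, reported_at: datetime) -> dict:
    # Label the image, look up the mapped department, and attach
    # the deadline after which the ticket escalates.
    label = classify_image(image)
    dept = ISSUE_TO_DEPARTMENT[label]
    return {
        "issue": label,
        "department": dept,
        "escalate_after": reported_at + timedelta(days=SLA_DAYS[dept]),
    }

def needs_escalation(ticket: dict, resolved: bool, now: datetime) -> bool:
    return not resolved and now > ticket["escalate_after"]

ticket = route_issue(b"...", datetime(2026, 2, 4))
print(ticket["department"])  # public_works
```

The escalation check is deliberately a pure function of the ticket and the clock, so an auditor can replay why any grievance was escalated.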

Moderator

So India is building public digital infrastructure for AI at scale. How do we balance scale with accountability and public trust? We have talked a lot about how we are building things, so let's talk about the other side. And can India lead the world in population-scale, multilingual AI for governance? Of course it can, I am sure of that. Amitji, would you like to have a shot at it first?

Shri Amit Kumar

So one thing you all have to realize is that whatever we do is at population scale and unparalleled, because of our size; even our POCs exceed the scale of European countries. Sir talked about UP's 60,000 panchayats; if you look at UP alone, it would probably be among the top 10 countries in terms of population and size. I think the world is vouching for us when it comes to the use cases.

We have the scale, and now we have the experience behind us: we did Aadhaar, we did UPI, we did FASTag, we did GST, and we did income tax. So we have the confidence that we can do anything at scale, and with the same frugal approach we will do it ten times cheaper than the Western world, and certainly not worse, only better. Over the last decade we have also evolved: for example, the concept of privacy with the DPDP Act, and consent-based usage, which Aadhaar brought. A lot has improved on the policy side, and once you have policies in place, systems become easy, because the systems themselves act as the rules; you do not need so much human intervention or discretion.

Take a very simple case: Bhashini. I remember four or five years back, Amitabh and I used to debate whether we even needed a Bhashini, because we had Google Translate services and so forth. In hindsight, that was the right call. In the future we have to have sovereignty; we do not have to be dependent, we need to be frugal, and we do not want to use applications that are very expensive from a taxpayer-money point of view.

If you roam around the AI Summit, you will see how many LLMs and SLMs we are building on our own. The honourable minister talked about five layers; on applications, I think we have ample talent to build them. For LLMs we use open LLMs, but we are developing our own, and Bhashini is one of the common infrastructures. Energy will be taken care of; for infra and chips we will anyway have a dependency, but the rest of the world has that dependency too; not everybody has rare earths, and not everybody is building chips.

And because we have the technical know-how, it is our bread and butter nowadays, we will take the learnings from all these systems and move forward. We were a bit slow in the last year or two, because AI itself was new for everyone, so we took some time. But from this year onwards we will really scale it up, because we have tested the waters, we have seen the success, and we will scale it up.

Moderator

Sure, thank you for sharing that. As we come towards the close of this conversation, I would like to leave you with one final thought: if Panchayati Raj institutions are the foundation of democracy, can AI, when built on a public stack and powered by language inclusion, become the strongest enabler of participatory governance in the 21st century? Just closing thoughts from you both. Alokji, would you?

Shri Alok Prem Nagar

Absolutely. He was just telling you that we have been able to do things at scale. This thing about UP that I told you, I wear it like a badge, to have done it in such a place. It is not an easy ask, because there are so many stakeholders with various issues of their own, and you have to engage with them and address those things. If my problem is well defined, and if I know what kind of thing is going to help me redress it, like Bhashini did for us, then what you said is going to come true. Being able to understand my problem, and knowing what parts of the problem can be fixed in what manner using the various tools available, that is the key. And it is not an oversimplification: good servant, bad master. That is something that stays; it is not going to land you in the right places if you just let it run around like an animal.

But if you know where to put it, what modules to insert, and what is being used in the background, that makes you more confident. I am not really an AI person, so I am speaking only on the strength of what I have learned, and the experience thus far has been outstanding, partly because we have had a very good partner. But beyond that, I am not throwing everything open to AI. I do not wear T-shirts saying "I love AI" or anything; I have a problem, it needs fixing, and I need to know which aspects of AI can help me fix it in the best possible manner. That is my take on this.

Shri Amit Kumar

Yeah. So, like sir said, sir is not an AI person; neither am I. Look at it that way: he was transparent enough to share that, and none of us were, because when I started, and I have been doing digital transformation for the public sector for over 20 years, there was no AI; there was not even DPI or DPG, names we have retrofitted since. And if you look at it, the idea of Panchayati Raj itself is participative governance: people have to assemble in the Gram Sabha and decide on the money they are getting, how to spend it and how to prioritize.

Absolutely. And if AI tools like Praman, Sabha Saar and Pancham can help strengthen that, what better can you expect from a participative-governance, democratization point of view? Sometimes technology becomes secondary; in my view, most of the time the ideas have to be clear in terms of what you want to achieve, what problem you want to solve, at what scale, and what guardrails you have to put in place. For example, when we do AI, it cannot be 100% autonomous. Of course. And it cannot be 100% human in the loop either.

Because if each and every transaction has to be approved by a human in the loop, it defeats the purpose of AI; there is no AI, and we are still living with rule-based algorithms. So the idea is that with AI we also train, monitor, have a mechanism to take complaints, and keep training it better so that we improve our accuracy. That is how the AI journey goes; it is slightly different from the previous digital-transformation journeys, which were more like transactional systems. Even currently with Sabha Saar, from whatever I am hearing from people and the market teams, it is giving great accuracy in terms of translation and summarization.

And I am sure that wherever there are little areas to improve, it will improve. We cannot stop it: once we have boarded a flight, we can only get down where we have to. So I think the future is bright. And from the MOPR experience point of view, I am sure it will also energize and motivate many others. I can say from my experience that if MOPR can use AI tools in rural India, there is no stopping us as a nation.

Moderator

Exactly. This is truly an achievement when it comes to MOPR within the government. Would you like to add anything, Alokji?

Shri Alok Prem Nagar

I thought of another application: spatial development plans, something we have been working on. We engaged with a lot of panchayats that were close to the highways. Typically, if a panchayat was on a national highway, close to a big city, and had a population of 10,000-plus, it was eligible to participate in this programme. There were 34 GPs that we involved, and we got the planning and architecture colleges to prepare spatial plans for them. A spatial plan is futuristic: it zones, it assigns, it looks into the future and sees how the place is going to grow; it devises road networks and tells people what the place would become over a period of time. We had a conference with gram panchayats around Bhopal, and the people were so annoyed: we do not need a spatial plan.

Over a period of time, of course, we told them what it was going to be, but we had this epiphany: people need to be able to see what the spatial plan will help them become. So at the next national conference we had a visualization for each of these 34 spatial development plans, and we showed people: if you want to become this, you have to do this. And there was much greater enthusiasm. The people on whom this plan falls, who are going to be subjected to it, if I could use those words, if they are not on board, there is no way you can carry it out. And that, I think, is wide open.

And after that, the entire state of Andhra Pradesh has gone ahead and said that all their planning is going to be spatial plans. So that is something amenable to AI tools. And a final thing I remembered: lots of times we need to communicate through audio-video messages. He mentioned Pancham. Pancham is a WhatsApp-based chatbot platform that allows us to have two-way conversations with all the sarpanches and panchayat secretaries in the country. So if there is messaging to be conveyed, if there are videos that need to be quickly created using AI tools, that would be hugely effective in getting the message across in the quickest possible way.

Moderator

Thank you. Thank you so much for such in-depth insights on gram panchayats and how things work behind the scenes. I am sure much of this was unknown to the audience, and this conversation has given a new tangent to how we look at rural development. Thank you so much, Shri Alok, and thank you so much, Shri Amit, for sharing these thoughts on gram panchayat development. Thank you so much for this fireside chat. I would like to call Ms. Deepika to please felicitate Mr. Alok.

Related Resources: knowledge base sources related to the discussion topics (27)
Factual Notes: claims verified against the Diplo knowledge base (3)
Confirmed (high confidence)

“The eGram Swaraj portal is used by all 2.5 lakh (250,000) gram panchayats and was initially built only in English, creating a language barrier for rural officials and citizens.”

The knowledge base states that eGram Swaraj encompasses all 250,000 gram panchayats and originally operated solely in English, which created significant participation barriers [S1][S2][S7].

Confirmed (high confidence)

“Bhashini, the government’s AI‑powered translation engine, allows a panchayat member to view portal pages in his own language with a single click, effectively translating directly between Indian languages.”

S1 mentions the integration of Bhashini with eGram Swaraj for translation, and S78 describes Bhashini’s capability to translate directly between Indian languages without using English as an intermediate, confirming the single‑click multilingual access claim.

Additional Context (medium confidence)

“In 2019, during a Karnataka Gram Sabha, officials could not follow proceedings because they were presented in English, illustrating the language barrier.”

S2 references a 2019 event when a programme was being started and notes that the eGram Swaraj portal worked only in English, providing contextual support that language barriers were evident at that time, though it does not specify Karnataka or the exact incident.

External Sources (79)
S1
Transforming Rural Governance Through AI: India’s Journey Towards Inclusive Digital Democracy — – Amit Kumar- Moderator – Shri Alok Prem Nagar- Amit Kumar
S2
Nepal Engagement Session — – Shri Amit Kumar- Moderator – Shri Amit Kumar- Shri Alok Prem Nagar
S3
Keynote-Olivier Blum — -Moderator: Role/Title: Conference Moderator; Area of Expertise: Not mentioned -Mr. Schneider: Role/Title: Not mentione…
S4
Keynote-Vinod Khosla — -Moderator: Role/Title: Moderator of the event; Area of Expertise: Not mentioned -Mr. Jeet Adani: Role/Title: Not menti…
S5
Day 0 Event #250 Building Trust and Combatting Fraud in the Internet Ecosystem — – **Frode Sørensen** – Role/Title: Online moderator, colleague of Johannes Vallesverd, Area of Expertise: Online session…
S6
Transforming Rural Governance Through AI: India’s Journey Towards Inclusive Digital Democracy — -Shri Alok Prem Nagar: Senior official from the Ministry of Panchayati Raj (MOPR), Government of India. He discusses the…
S7
https://dig.watch/event/india-ai-impact-summit-2026/nepal-engagement-session — All panchayats, all two and a half lakh of them, they are present on eGram Swaraj. For right from planning to the paymen…
S8
Re-envisioning DCAD for the Future — Although voice-to-text technology has improved, it still requires human resources to ensure accuracy, particularly when …
S9
UN OEWG 2021-2025 9th substantive session — The concept of the Needs-Based ICT Security Capacity Building Catalogue was not extensively discussed in most of the ses…
S10
https://dig.watch/event/india-ai-impact-summit-2026/the-future-of-public-safety-ai-powered-citizen-centric-policing-in-india — And that, I think, is wide open. And we’ve had after that. But the entire state of Andhra Pradesh has gone ahead and sai…
S11
WS #100 Integrating the Global South in Global AI Governance — Fadi Salim: Great question as well. Maybe on the data point first, and then I’ll come to the second part of your ques…
S12
Day 0 Event #192 Leveraging the Namaa Platform and AI to Promote Sustainability — 1. Drone technology for agricultural land inventory: Conducts accurate field surveys of large areas at high speed, impro…
S13
India harnesses AI for advanced weather forecasting amid climate challenges — India is leveraging AI to enhance its weather forecasting capabilities in response to the escalating challenges posed by…
S14
AI for Social Good Using Technology to Create Real-World Impact — Oh, totally. I think you talked about what you’re doing at ISE. I think there are many initiatives in India which essent…
S15
AI, Data Governance, and Innovation for Development — Sade Dada: So, you know, getting to these areas is really, really complicated, very, very challenging, and it’s because …
S16
Trump and tech: After 100 days — Continuity, institutions, and political cycles Participants generally agreed that despite shifts in leadership or politi…
S17
Overview of AI policy in 10 jurisdictions — Summary: Brazil is working on its first AI regulation, with Bill No. 2338/2023 under review as of December 2024. Inspire…
S18
AI Meets Agriculture Building Food Security and Climate Resilien — Chief Minister Devendra Fadnavis presented Maharashtra’s Maha Agri AI Policy 2025-2029, emphasizing the shift from demon…
S19
Capacity development — The urgency for capacity development could be addressed by providing just-in-time learning as a part of policy processes.
S20
Digital Inclusion Through a Multilingual Internet | IGF 2023 WS #297 — In conclusion, Universal Acceptance is not just a technical matter but a human one that requires inclusivity and priorit…
S21
Open Forum #60 Cooperating for Digital Resilience and Prosperity — Implementation challenges exist between excellent policies and practical application, requiring focus on capacity buildi…
S22
AI as a tech ally in saving endangered languages — Benchmarks matter. Many AI systems are evaluated primarily on English and other major languages. Without proper testing,…
S23
Nepal Engagement Session — So Alokji, let’s talk a bit about Sabha Saar Impact. Let’s let our audience know about it. And with its launch on 14th A…
S24
Transforming Rural Governance Through AI: India’s Journey Towards Inclusive Digital Democracy — The expansion of language support remains an ongoing challenge and opportunity. Currently, Bhashini is being enhanced to…
S25
Collaborative AI Network – Strengthening Skills Research and Innovation — Data ecosystems approach breaks silos by creating thematic interoperability across ministries for specific policy areas …
S26
WS #97 Interoperability of AI Governance: Scope and Mechanism — Mauricio Gibson: People hear me? Yes. Thank you all for having me. It’s a pleasure to be here. I’m going to build on w…
S27
How AI Drives Innovation and Economic Growth — The speakers show broad agreement on AI’s transformative potential for development but significant disagreements on impl…
S28
Open Forum #3 Cyberdefense and AI in Developing Economies — Capacity Building and Human Resources Effective capacity building requires training at multiple levels – technical trai…
S29
Building Scalable AI Through Global South Partnerships — The pathway concept recognizes that successful AI implementation involves much more than technical development. It requi…
S30
Panel Discussion Summary: AI Governance Implementation and Capacity Building in Government — International cooperation and knowledge sharing are crucial for scaling capacity building efforts, particularly for deve…
S31
Open Forum #17 AI Regulation Insights From Parliaments — Capacity Building and Education Capacity building and education are essential for all stakeholders Development | Capac…
S32
Building Trust through Transparency — Nevertheless, amidst these challenges, global civil society emerges as a beacon of hope. It possesses the opportunity to…
S33
Ad Hoc Consultation: Monday 5th February, Afternoon session — Discussions have also highlighted a shared concern regarding redundancies in legal texts. A neutral consensus has emerge…
S34
Open Forum #66 the Ecosystem for Digital Cooperation in Development — Tale Jordbakke: First of all, thank you for having NORAD in this panel. In NORAD, we believe that achieving the SDGs can…
S35
Blended Finance’s Broken Promise and How to Fix It / Davos 2025 — Despite their different institutional backgrounds, both speakers emphasize the need for tailored, context-specific appro…
S36
Closure of the session — The delegation advocated for a holistic mechanism, ensuring confidence-building between states, enhancing cybersecurity …
S37
Digital Democracy Leveraging the Bhashini Stack in the Parliamen — While both speakers support responsible development, Kaur advocates for active government stewardship and support for pu…
S38
UNESCO Global Report — © Nevada Center for Excellence in Disabilities (NCED) and Enabled Nevada, University of Nevada, Reno (USA). ## 1.1 Pur…
S39
Foreword — – One threshold to establish from the outset is the minimum key features of devices that will enable people to use the i…
S40
NATIONAL INFORMATION AND COMMUNICATION TECHNOLOGY POLICY — Electronic payment systems are the cornerstone of E-Commerce development in the country by ensuring convenien…
S41
Exploring Digital Transformation for Economic Empowerment in Africa: Opportunities, Challenges, and Policy Priorities (International Trade and Research Centre, Nigeria) — A youthful population and widespread mobile phone use have spurred this growth. This growth has been propelled by the w…
S42
AI Meets Agriculture Building Food Security and Climate Resilien — Low to moderate disagreement level with significant implications for AI governance in agriculture. The differences in ap…
S43
Artificial intelligence (AI) – UN Security Council — The discussions across various sessions highlighted several risks associated with the over-reliance on AI-powered conten…
S44
Driving Social Good with AI_ Evaluation and Open Source at Scale — Moderate disagreement with significant implications. The disagreements reflect deeper tensions between technical efficie…
S45
UNSC meeting: Artificial intelligence, peace and security — Brazil:Thank you, Mr. President, Mr. President, dear colleagues. I thank the Secretary General for his briefing today an…
S46
Transforming Rural Governance Through AI: India’s Journey Towards Inclusive Digital Democracy — – Shri Alok Prem Nagar- Amit Kumar Language barriers prevented rural citizens from understanding governance processes, …
S47
Leaders TalkX: Local Voices, Global Echoes: Preserving Human Legacy, Linguistic Identity and Local Content in a Digital World — NK Goyal, President of the CMAI Association of India, presented a series of strategies for digital empowerment, includin…
S48
Building the Workforce_ AI for Viksit Bharat 2047 — We know we have 5 .8 million professionals. For example, the Tata AI Saki Immersion Programme is empowering rural women …
S49
Nepal Engagement Session — Sabha Saar has revolutionized meeting documentation by reducing the time burden on panchayat secretaries from 65% of the…
S50
https://dig.watch/event/india-ai-impact-summit-2026/nepal-engagement-session — Sabha Saar was one thing that we carried out for the convenience of the panchayats and the panchayat secretaries as oppo…
S51
Comprehensive Discussion Report: AI’s Transformative Potential for Global Economic Growth — It’s chips, chips and computing infrastructure. The next layer above it is the cloud infrastructure, the cloud services….
S52
Trump and tech: After 100 days — Continuity, institutions, and political cycles Participants generally agreed that despite shifts in leadership or politi…
S53
MASTERPLAN FLAGSHIP PROGRAMMES — To create this plan, the government will convene an interagency AI task force comprised of National Government agencies,…
S55
Opening of the session — Expanding capacity building programs
S58
Open Forum #60 Cooperating for Digital Resilience and Prosperity — Implementation challenges exist between excellent policies and practical application, requiring focus on capacity buildi…
S59
https://dig.watch/event/india-ai-impact-summit-2026/press-briefing-by-hmit-ashwani-vaishnav-on-ai-impact-summit-2026-l-day-5 — As I said, already five lakh plus visitors have already, we were just doing the estimate, I think actual number is about…
S60
Opening of the session — Acknowledges the exchange of ideas and negotiation process
S61
Agenda item 5: discussions on substantive issues contained inparagraph 1 of General Assembly resolution 75/240 (continued)/ part 4 — This honest acknowledgment shifted the dynamic from delegates criticizing the text to understanding the constraints the …
S62
Fireside Conversation: 01 — The conversation maintained an optimistic and collaborative tone throughout, with both speakers expressing enthusiasm ab…
S63
Strengthening Corporate Accountability on Inclusive, Trustworthy, and Rights-based Approach to Ethical Digital Transformation — The discussion maintained a professional, collaborative tone throughout, with speakers demonstrating expertise while ack…
S64
Safeguarding Children with Responsible AI — The discussion maintained a tone of “measured optimism” throughout. It began with urgency and concern (particularly in B…
S65
Building Inclusive Societies with AI — The discussion maintained a constructive and solution-oriented tone throughout, characterized by: The tone remained con…
S66
Main Session 3 — The tone was overwhelmingly positive and celebratory, with participants expressing genuine affection for and commitment …
S67
WS #144 Bridging the Digital Divide Language Inclusion As a Pillar — Christian Daswon: Well, I think that’s exactly why the organization that we’re trying to build is focused on listening. …
S68
Aligning AI Governance Across the Tech Stack ITI C-Suite Panel — The discussion maintained a collaborative and constructive tone throughout, with panelists generally agreeing on core pr…
S69
Smart Regulation Rightsizing Governance for the AI Revolution — These key comments fundamentally shaped the discussion by establishing a realistic, pragmatic framework for AI governanc…
S70
Bridging the AI innovation gap — The tone is consistently inspirational and collaborative throughout. The speaker maintains an optimistic, forward-lookin…
S71
Democratizing AI: Open foundations and shared resources for global impact — The tone was consistently collaborative, optimistic, and forward-looking throughout the discussion. Speakers maintained …
S72
Global Perspectives on Openness and Trust in AI — And China using open source is actually very interesting because open source has a number of benefits and also risks. I …
S73
Closing Ceremony — Anil Kumar Lahoti: Good afternoon, Ministers, Excellencies, Ladies and Gentlemen. I’m not giving a presentation. I’m jus…
S74
Agenda item 5: discussions on substantive issues contained inparagraph 1 of General Assembly resolution 75/240 (continued) – session 1 — The UK spelling and grammar have been accounted for in this revision. [Note: The initial instructions stated that UK sp…
S75
High Level Dialogue with the Secretary-General — Barriers to Meaningful Participation Speakers identified several obstacles to meaningful youth engagement. Frias highli…
S76
WS #189 AI Regulation Unveiled: Global Pioneering for a Safer World — Ananda Gautam: Thank you so much for describing the role of civil society. I think Juliana has started doing that. So…
S77
Criss-cross of digital margins for effective inclusion | IGF 2023 Town Hall #150 — Pavel Farhan:goal. Thank you. All right. Hi again, this is Pavel for The Record. I guess the benefit of going last is At…
S78
ElevenLabs Voice AI Session & NCRB/NPM Fireside Chat — The discussion revealed the plugin’s sophisticated capabilities developed in response to diverse real-world requirements…
S79
Open Forum #82 Catalyzing Equitable AI Impact the Role of International Cooperation — Abhishek Agarwal: Thank you, Minister. Abhishek? Yeah, I kind of echo the views of Her Excellency, like the three key in…
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
Shri Alok Prem Nagar
11 arguments · 144 words per minute · 3,297 words · 1,372 seconds
Argument 1
Translation of eGram Swaraj portals and meeting minutes into local languages empowers panchayat users (Alok)
EXPLANATION
Alok explains that the eGram Swaraj portal originally operated only in English, limiting accessibility for rural officials. By using the Bhashini language AI, portal content and meeting minutes can be rendered in each panchayat’s native language, enabling users to understand and act on information directly.
EVIDENCE
He notes that the portal works in English [3] and describes attending a Gram Sabha where he could not understand the proceedings, highlighting the language barrier [4-7]. He then introduces Bhashini, showing how a panchayat member can view the expenses page in their own language with a click, calling it “magic” [16-18]. Later he explains that the Sabha Saar tool creates draft minutes from audio/video recordings and translates them back into the local language for editing and upload [22-23].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Language barriers and the need for multilingual portals are documented in S1, while S14 highlights broader language‑AI accessibility efforts.
MAJOR DISCUSSION POINT
Language AI enables inclusive access to governance information
AGREED WITH
Shri Amit Kumar, Moderator
Argument 2
Automated voice‑to‑text summarization reduces time spent on minutes, improves record‑keeping and transparency (Alok)
EXPLANATION
Alok describes the Sabha Saar tool that automatically converts audio or video recordings of Gram Sabha meetings into draft minutes, dramatically cutting the time secretaries spend on documentation. This automation improves the completeness and availability of meeting records, fostering greater transparency.
EVIDENCE
He reports that panchayat secretaries struggled to produce minutes on time, prompting the development of a tool that generates a draft minute from recordings using Bhashini [21-23]. He further details the simple workflow: record the meeting on a mobile device, upload the file, and receive a draft in the local language for finalisation [60-64]. Adoption data show that states like Odisha, Tamil Nadu and Tripura are already using the tool to track activities after minutes are created [66-70].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Voice‑to‑text challenges and the need for human oversight are discussed in S8, and the Sabha Saar summarisation tool is described in S1.
MAJOR DISCUSSION POINT
AI‑driven summarisation streamlines governance documentation
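The workflow Alok describes — record a Gram Sabha meeting on a phone, upload the audio later, and receive draft minutes rendered in the panchayat’s local language for human editing — can be sketched as a simple pipeline. This is an illustrative outline only: the function names and placeholder service calls below are hypothetical, not the actual Bhashini or Sabha Saar APIs.

```python
# Hypothetical sketch of the record → upload → transcribe → summarise →
# translate workflow described above. All service calls are placeholders.

from dataclasses import dataclass


@dataclass
class DraftMinutes:
    language: str
    text: str


def transcribe(audio_path: str) -> str:
    # Placeholder for a Bhashini-style ASR (speech-to-text) call.
    return f"transcript of {audio_path}"


def summarise(transcript: str) -> str:
    # Placeholder for the minute-drafting (summarisation) step.
    return f"minutes: {transcript}"


def translate(text: str, target_language: str) -> str:
    # Placeholder for Indian-language translation of the draft.
    return f"[{target_language}] {text}"


def draft_minutes(audio_path: str, local_language: str) -> DraftMinutes:
    """Produce an editable draft in the panchayat's local language."""
    transcript = transcribe(audio_path)
    summary = summarise(transcript)
    return DraftMinutes(local_language, translate(summary, local_language))


minutes = draft_minutes("gram_sabha_recording.mp3", "hi")
print(minutes.language)  # hi
```

Because the upload happens after the meeting, the pipeline sidesteps live-connectivity constraints, matching the offline-first pattern described for rural deployments.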
Argument 3
Simple, low‑cost tools (mobile phone recordings) enable rapid statewide onboarding, exemplified by Uttar Pradesh’s 59,000 panchayats (Alok)
EXPLANATION
Alok argues that using readily available devices such as mobile phones for recording meetings eliminates the need for expensive hardware, allowing fast adoption across large numbers of gram panchayats. He cites Uttar Pradesh’s successful onboarding of 59,000 panchayats to eGram Swaraj in just 40 days as proof of concept.
EVIDENCE
He explains that the only requirement is a recording device, often a mobile phone, and that the process sidesteps connectivity issues by uploading later [61-64]. He then recounts Uttar Pradesh’s rapid rollout, noting that all 59,000 gram panchayats completed registration, digital signing and migration away from checkbooks within 40 days [107-111].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The frugal mobile‑phone approach and Uttar Pradesh’s 59,000‑panchayat rollout are detailed in S2 (and reiterated in S1).
MAJOR DISCUSSION POINT
Low‑cost mobile solutions accelerate AI adoption at scale
AGREED WITH
Shri Amit Kumar
Argument 4
Caution in extending solutions to other ministries; focus on institutional integration and leveraging existing portals (Alok)
EXPLANATION
Alok warns against a blanket application of the eGram Swaraj model to other ministries, emphasizing the need to respect existing robust systems and to centre the institution (panchayat) rather than just the portal. He suggests building on current schemes and integrating AI through established channels.
EVIDENCE
He states he is not in a position to advise other ministries because they have their own robust systems, citing examples such as the MGNREGA and PM Awas Yojana portals [138-144]. He highlights that Panchayati Raj’s emphasis is on strengthening the institution and its accounting/planning functions, and mentions collaborations with the meteorological department for daily forecasts accessible via Bhashini [145-148].
MAJOR DISCUSSION POINT
Tailored integration respects existing ministry architectures
AGREED WITH
Shri Amit Kumar
Argument 5
Extending AI to service‑request routing, spatial development planning, WhatsApp chatbot (Pancham), and image‑based issue detection (Alok)
EXPLANATION
Alok outlines future expansions of AI beyond meeting minutes, including routing citizen service requests, generating spatial development plans, using a WhatsApp‑based chatbot (Pancham) for two‑way communication, and employing computer‑vision to detect infrastructure issues from images. These extensions aim to make local governance more responsive and data‑driven.
EVIDENCE
He describes mechanisms for citizens to query service availability via common service centers and the Meri Panchayat app, which can capture images of issues and automatically assign them to the responsible department using AI [201-210]. He also details a pilot in Guwahati using cameras on buses to label drains or potholes, and a spatial planning initiative for 34 gram panchayats that later influenced Andhra Pradesh’s statewide planning [259-272]. Finally, he mentions Pancham, a WhatsApp-based chatbot that enables two-way conversation with sarpanchas and secretaries [273-275].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The Pancham WhatsApp chatbot and spatial planning pilots are mentioned in S2, and Andhra Pradesh’s statewide spatial plans enabled by AI are noted in S10.
MAJOR DISCUSSION POINT
AI integration broadens service delivery and planning capabilities
Argument 6
Transparent financial dashboards and searchable records enable citizens to monitor plans, expenditures, and asset status (Alok)
EXPLANATION
Alok points out that the eGram Swaraj portal now provides detailed, searchable financial information—including plans, execution status, bills, payments, and geotagged assets—allowing any citizen to audit gram panchayat finances. This transparency empowers citizens to hold officials accountable.
EVIDENCE
He explains that users can drill into gram panchayat records to see finance commission grants, plans, execution percentages, bill status, payment completion, asset locations and geotags, and even view them on Gram Manchitra [40-41]. Earlier he also highlighted the ability to view expense pages in the local language via Bhashini, making financial data understandable to non-English speakers [16-18].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Financial dashboards and searchable records for citizen oversight are outlined in S1.
MAJOR DISCUSSION POINT
Open financial data strengthens citizen oversight
Argument 7
Capacity‑building programmes are crucial for scaling AI tools across gram panchayats.
EXPLANATION
Alok stresses that without systematic training, officials cannot effectively adopt AI‑enabled platforms, so a dedicated capacity‑building programme is essential for nationwide rollout.
EVIDENCE
He notes the need to intensify capacity building through a training programme that started the previous year, describing the journey as “incredible” and saying it is being adopted all over the country. [41-43]
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Capacity‑building initiatives are referenced in S2, and the ICT security capacity‑building catalogue provides additional context in S9.
MAJOR DISCUSSION POINT
Training drives AI adoption
Argument 8
AI‑driven analysis of drone survey data unlocks village‑level solar‑energy planning.
EXPLANATION
By processing the dense point‑cloud information from SVAMITVA drone surveys, AI identifies rooftops and calculates solar‑panel potential, which is then linked to the PM Surya Ghar Yojana portal, enabling gram panchayats to plan renewable‑energy installations.
EVIDENCE
Alok explains that AI converted rooftop images into solarisation potential and integrated this with the PM Surya Ghar Yojana portal, allowing gram panchayats to view roof-wise panel capacity for 2.38 lakh panchayats. [30-33]
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Drone survey data analysis for planning is illustrated in S12, showing AI processing of high‑resolution imagery.
MAJOR DISCUSSION POINT
AI enables renewable‑energy planning at the grassroots
Argument 9
Integrating AI with meteorological data provides daily weather forecasts to every gram panchayat.
EXPLANATION
A partnership with the meteorological department delivers localized, daily weather forecasts that citizens can access on their phones via Bhashini, enhancing the relevance of governance services.
EVIDENCE
Alok mentions hooking up with the meteorological department so that daily forecasts are generated for every gram panchayat and can be viewed on phones using Bhashini. [146-148]
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
AI‑enhanced weather forecasting models and daily forecasts are reported in S13.
MAJOR DISCUSSION POINT
AI‑enabled weather information for local governance
Argument 10
Extending Bhashini to Village Water Committee meetings demonstrates cross‑sector applicability of language AI.
EXPLANATION
The Department of Drinking Water and Sanitation approached the team to use Bhashini for its village water committee (VWC) meetings, showing that the same language‑AI infrastructure can serve sectors beyond panchayat administration.
EVIDENCE
Alok reports that the drinking water and sanitation department wants to use Bhashini for VWC meetings and that initial interactions between the two teams have already taken place. [98-99]
MAJOR DISCUSSION POINT
Cross‑sector expansion of language AI
Argument 11
Bhashini simplifies multilingual official correspondence, strengthening inter‑state coordination.
EXPLANATION
Using Bhashini, Alok could draft letters to states in their native languages and receive replies in those languages, streamlining bureaucratic communication across linguistic boundaries.
EVIDENCE
He recounts that Bhashini enabled him to write letters to states in their languages and that he received a letter in Telugu for the first time, illustrating AI-facilitated multilingual communication. [117-119]
MAJOR DISCUSSION POINT
AI simplifies multilingual governance communication
Shri Amit Kumar
7 arguments · 176 words per minute · 2,680 words · 910 seconds
Argument 1
AI must be affordable, accessible to rural populations, and avoid urban‑centric elitism (Amit)
EXPLANATION
Amit stresses that AI solutions for governance must be low‑cost and usable with devices already owned by villagers, ensuring that the 900 million rural citizens are not excluded. He warns against treating AI as an elite, urban‑only technology.
EVIDENCE
He notes that AI should not be limited to urban or industrial sectors and must include the 900 million rural population, emphasizing frugality and the fact that gram panchayats need only a mobile phone to record and upload data [75-84]. He also describes the danger of an elitist approach that positions AI solely for cities, industries or passports, calling for inclusive design [78-80].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Affordability and rural‑focused design using existing phones are emphasized in S2, with S14 underscoring inclusive language‑AI design.
MAJOR DISCUSSION POINT
Inclusive, low‑cost AI for rural India
AGREED WITH
Shri Alok Prem Nagar, Moderator
Argument 2
Structured documentation fosters accountability, changes behaviour, and builds a culture of openness (Amit)
EXPLANATION
Amit argues that systematic, AI‑generated documentation creates accountability by making decisions and expenditures visible, which in turn alters officials’ behavior and cultivates a transparent governance culture. He links this to broader democratic participation.
EVIDENCE
He states that documentation will change the way gram panchayats work, improve accountability and transparency, and shift cultural expectations about note-taking and reporting [93-95]. He also mentions that AI tools like Pramana, Sabha Saar and Pancham will help democratise governance and that the experience will energise others [96-97].
MAJOR DISCUSSION POINT
Documentation as a driver of accountability and cultural change
Argument 3
Overcoming resistance requires training, human‑in‑the‑loop safeguards, and supportive policies to ensure reliable AI use (Amit)
EXPLANATION
Amit highlights that initial resistance to AI can be mitigated through capacity‑building, policies that embed human oversight, and clear guidelines. He stresses that these measures are essential for trustworthy and reliable AI deployment in rural governance.
EVIDENCE
He acknowledges early challenges and resistance, noting the need for human-in-the-loop safeguards and the ability to correct AI outputs, as well as the importance of training and policy support [85-89]. He also references capacity-building programmes started the previous year that have helped adoption [41-43].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Human‑in‑the‑loop safeguards and the need for training are highlighted in S8, while S9 discusses capacity‑building frameworks supporting policy.
MAJOR DISCUSSION POINT
Training, oversight and policy as pillars for AI adoption
Argument 4
API‑based, modular design with open standards prevents vendor lock‑in and supports long‑term scalability (Amit)
EXPLANATION
Amit advocates for an open, API‑centric architecture that uses interoperable standards, enabling different ministries to integrate AI components without being tied to a single vendor. This design ensures sustainability and scalability of AI initiatives.
EVIDENCE
He describes the need for open architecture, standards, modular APIs, and the ability to shift technology stacks, emphasizing data residency within India and the capacity to retrain models on new platforms [165-168]. He contrasts early monolithic POCs with current API-based, integrable applications and calls for similar approaches for AI [182-186].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Open‑standard data commons and modular AI architectures are discussed in S11, supporting API‑based, vendor‑neutral design.
MAJOR DISCUSSION POINT
Open, modular architecture for sustainable AI
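The vendor-neutral, API-centric design Amit advocates can be illustrated with a small interface sketch: callers depend on an abstract capability, so the backend (a hosted service or an in-country model) can be swapped without rewriting applications. The class and method names below are hypothetical illustrations, not any actual government or Bhashini specification.

```python
# Illustrative sketch of an API-based, vendor-neutral module boundary.
# Names are hypothetical; the point is the swappable-backend pattern.

from typing import Protocol


class TranslationBackend(Protocol):
    def translate(self, text: str, source: str, target: str) -> str: ...


class HostedServiceBackend:
    def translate(self, text: str, source: str, target: str) -> str:
        # Placeholder for a call to a hosted translation API.
        return f"{target}:{text}"


class LocalModelBackend:
    def translate(self, text: str, source: str, target: str) -> str:
        # Placeholder for an on-premises model, keeping data in-country.
        return f"local-{target}:{text}"


def render_portal_page(page_text: str, user_language: str,
                       backend: TranslationBackend) -> str:
    # The portal depends only on the interface, never on one vendor.
    return backend.translate(page_text, "en", user_language)


print(render_portal_page("Expenses", "te", HostedServiceBackend()))  # te:Expenses
print(render_portal_page("Expenses", "te", LocalModelBackend()))     # local-te:Expenses
```

Keeping every AI capability behind such an interface is what makes it possible to retrain models on a new platform or enforce data residency without touching the applications that consume the service, the scalability property Amit contrasts with early monolithic POCs.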
Argument 5
Building sovereign AI infrastructure, domestic LLMs, and scaling AI initiatives across ministries at population scale (Amit)
EXPLANATION
Amit outlines India’s ambition to develop home‑grown AI models and infrastructure, reducing dependence on foreign technology while leveraging the country’s experience with large‑scale digital programmes. He asserts that this sovereign approach will enable population‑scale AI deployment across ministries.
EVIDENCE
He references India’s track record with Aadhaar, UPI, FASTag and GST as foundations for scaling AI, and notes the development of domestic LLMs and open-source models, stressing the need for sovereignty and cost-effective solutions [223-236]. He also mentions ongoing work on AI infrastructure, chips and the intention to accelerate scaling after early learning phases [237-242].
MAJOR DISCUSSION POINT
Sovereign AI ecosystem for nation‑wide deployment
Argument 6
Public‑domain AI outputs increase citizen participation, reinforce accountability, and require guardrails for trust (Amit)
EXPLANATION
Amit contends that making AI‑generated information publicly available empowers citizens to engage with governance processes, thereby enhancing accountability. He also stresses the necessity of safeguards, human oversight and ethical guardrails to maintain public trust.
EVIDENCE
He notes that AI outputs placed in the public domain enable citizens to monitor meetings, budgets and services, increasing participation and accountability, while emphasizing the need for guardrails, human-in-the-loop checks and monitoring mechanisms to ensure trustworthiness [92-95][241-247].
MAJOR DISCUSSION POINT
Open AI outputs boost participation but need ethical safeguards
Argument 7
Strong data‑privacy frameworks and data residency within India are essential for trustworthy AI deployment in rural governance.
EXPLANATION
Amit argues that policies such as the DPDP Act, which enforce consent‑based data usage, together with keeping data on Indian servers, create the confidence needed for large‑scale AI adoption while protecting citizens’ rights.
EVIDENCE
He references the evolution of privacy policies, the DPDP Act, consent-based usage, and stresses that data residency must be within India to ensure a trustworthy AI ecosystem. [223-227]
MAJOR DISCUSSION POINT
Privacy and data residency as trust anchors
Moderator
1 argument · 133 words per minute · 640 words · 286 seconds
Argument 1
Language AI reaches the last‑mile citizen, delivering benefits directly in local languages.
EXPLANATION
The moderator highlights that language‑AI technology bridges the gap to the most remote citizens, ensuring that public services and information become accessible in the languages and dialects people actually use.
EVIDENCE
He states, “Actually, it reaches the last mile citizen when you talk about those benefits,” and adds, “So India’s last mile operates in local languages and dialects, as you mentioned, solving that problem.” [34-35]
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The role of language AI in reaching last‑mile citizens is described in S1, and S14 reinforces its cross‑dialect accessibility.
MAJOR DISCUSSION POINT
Language AI bridges last‑mile gap
Agreements
Agreement Points
Language AI enables last‑mile inclusion and participation in Gram Sabha governance
Speakers: Shri Alok Prem Nagar, Shri Amit Kumar, Moderator
Translation of eGram Swaraj portals and meeting minutes into local languages empowers panchayat users (Alok)
AI must be affordable, accessible to rural populations, and avoid urban‑centric elitism (Amit)
Language AI reaches the last‑mile citizen, delivering benefits directly in local languages (Moderator)
All participants stress that multilingual AI (Bhashini) breaks language barriers, allowing villagers and officials to understand portal data, meeting minutes and services in their own languages, thereby expanding participation and trust [16-18][22-23][75-84][34-35].
POLICY CONTEXT (KNOWLEDGE BASE)
The deployment of Bhashini ASR-powered voice-to-text meeting summarisation for Gram Sabha meetings in India demonstrates a policy focus on inclusive digital democracy and language support for local governance [S23][S24].
Capacity‑building and training are essential for scaling AI tools in rural governance
Speakers: Shri Alok Prem Nagar, Shri Amit Kumar
Capacity‑building programmes are crucial for scaling AI tools across gram panchayats (Alok)
Overcoming resistance requires training, human‑in‑the‑loop safeguards, and supportive policies (Amit)
Both speakers highlight that without systematic training and capacity-building, panchayat officials cannot adopt AI-enabled platforms; programmes started last year have been described as “incredible” and necessary to overcome early resistance [41-43][85-89].
POLICY CONTEXT (KNOWLEDGE BASE)
Multiple policy discussions stress multi-level capacity building for AI adoption, including technical and policy training for officials and frontline workers [S28][S30][S31].
Low‑cost mobile‑phone based solutions enable rapid, large‑scale onboarding
Speakers: Shri Alok Prem Nagar, Shri Amit Kumar
Simple, low‑cost tools (mobile phone recordings) enable rapid statewide onboarding, exemplified by Uttar Pradesh’s 59,000 panchayats (Alok)
AI must be affordable and usable with devices already owned by villagers; gram panchayats need only a mobile phone (Amit)
Both agree that leveraging ubiquitous mobile phones eliminates expensive hardware needs and allowed Uttar Pradesh to register 59,000 gram panchayats in 40 days, demonstrating frugal scalability [61-64][107-111][82-84].
POLICY CONTEXT (KNOWLEDGE BASE)
Studies on digital transformation in Africa highlight widespread affordable mobile phones as a catalyst for scaling digital services, underscoring the need for minimum device standards [S41][S39].
Structured documentation (minutes, financial dashboards) improves transparency and accountability
Speakers: Shri Alok Prem Nagar, Shri Amit Kumar
Automated voice‑to‑text summarisation reduces time spent on minutes and improves record‑keeping (Alok)
Transparent financial dashboards and searchable records enable citizens to monitor plans, expenditures and assets (Alok)
Structured documentation fosters accountability, changes behaviour and builds a culture of openness (Amit)
Both speakers assert that AI-generated minutes and searchable financial data make governance more open, allowing any citizen to audit spending and track implementation, thereby strengthening accountability [21-23][40-41][93-95].
Human‑in‑the‑loop oversight and editability of AI outputs are necessary
Speakers: Shri Alok Prem Nagar, Shri Amit Kumar
The Sabha Saar tool provides a draft minute that can be edited before upload (Alok)
Human‑in‑the‑loop safeguards allow correction of AI outputs and maintain trust (Amit)
Both emphasize that AI should augment, not replace, human judgement; users can edit generated minutes and have mechanisms to correct AI results, ensuring reliability and trustworthiness [60-64][85-89][241-247].
POLICY CONTEXT (KNOWLEDGE BASE)
Governance literature notes the tension between full automation and necessary human oversight to ensure cultural sensitivity and accountability in AI-generated content [S42][S44].
Open, modular architecture with interoperable standards prevents vendor lock‑in and supports long‑term sustainability
Speakers: Shri Alok Prem Nagar, Shri Amit Kumar
Caution in extending solutions to other ministries; focus on institutional integration and leveraging existing portals (Alok)
API‑based, modular design with open standards avoids vendor lock‑in and ensures scalability (Amit)
Both agree that AI solutions must be built on open, standards-based, API-centric architectures that can interoperate with existing ministry systems, ensuring sustainability and avoiding dependence on a single vendor [138-144][165-186].
POLICY CONTEXT (KNOWLEDGE BASE)
Government-level AI networks advocate interoperable data ecosystems and modular standards to avoid lock-in and enable cross-ministerial reuse [S25][S26].
Similar Viewpoints
Both stress frugal, phone‑based solutions as the cornerstone for inclusive AI deployment in villages, rejecting high‑cost, urban‑focused models [61-64][107-111][75-84][82-84].
Speakers: Shri Alok Prem Nagar, Shri Amit Kumar
AI must be affordable, accessible to rural populations, and avoid urban‑centric elitism (Amit)
Simple, low‑cost tools (mobile phone recordings) enable rapid statewide onboarding (Alok)
Both view AI‑generated documentation as a driver of accountability, transparency and cultural change in governance practices [21-23][40-41][93-95].
Speakers: Shri Alok Prem Nagar, Shri Amit Kumar
Structured documentation fosters accountability (Amit)
Automated voice‑to‑text summarisation and transparent financial dashboards improve record‑keeping (Alok)
Unexpected Consensus
Both speakers endorse open, modular architecture despite Alok’s earlier caution about extending solutions to other ministries
Speakers: Shri Alok Prem Nagar, Shri Amit Kumar
Caution in extending solutions to other ministries; focus on institutional integration (Alok)
API‑based, modular design with open standards prevents vendor lock‑in (Amit)
While Alok initially warns against a one-size-fits-all approach, he nevertheless supports integration with existing robust systems, aligning with Amit’s call for open, interoperable architectures, an alignment that was not explicitly highlighted earlier in the discussion [138-144][165-186].
POLICY CONTEXT (KNOWLEDGE BASE)
Despite cautionary remarks, the shared endorsement aligns with collaborative AI network calls for universal standards while recognizing ministry-specific concerns [S25][S27].
Overall Assessment

The discussion shows strong convergence among the participants on the need for multilingual, low‑cost AI tools that are supported by capacity‑building, human oversight, and open, standards‑based architectures. These shared positions span inclusion, transparency, scalability and sustainability.

High consensus – the speakers largely agree on the principles, technologies and policy approaches required to make AI‑enabled rural governance inclusive, accountable and future‑proof, suggesting a solid foundation for coordinated action across ministries.

Differences
Different Viewpoints
Level of automation versus required human oversight in AI‑enabled meeting summarisation
Speakers: Shri Alok Prem Nagar, Shri Amit Kumar
Alok describes the Sabha Saar tool as a “miracle” that automatically creates draft minutes from audio/video recordings, requiring only minimal user editing before upload [22-23][60-64]. Amit stresses that AI outputs must be subject to human-in-the-loop safeguards, correction mechanisms and guardrails to maintain trust and avoid fully autonomous decisions [85-89][241-247].
Alok presents the tool as largely self‑sufficient, whereas Amit argues that human review remains essential to ensure accuracy and accountability.
POLICY CONTEXT (KNOWLEDGE BASE)
The Bhashini-based meeting summarisation tool illustrates practical trade-offs between automated transcription and the need for human review to maintain accuracy and trust [S23][S42][S44].
Approach to scaling AI solutions across ministries and government systems
Speakers: Shri Alok Prem Nagar, Shri Amit Kumar
Alok cautions against a blanket extension of the eGram Swaraj model to other ministries, emphasizing respect for existing robust systems and focusing on institutional (panchayat) integration [138-144]. Amit advocates for an open, API-based, modular architecture with interoperable standards to enable cross-ministry use and avoid vendor lock-in, stressing long-term sustainability [165-186].
Alok prefers ministry‑specific, cautious integration, while Amit pushes for a universal, open‑architecture framework for AI across government.
POLICY CONTEXT (KNOWLEDGE BASE)
Discussions on AI interoperability highlight divergent views on universal standards versus tailored ministry solutions, reflecting ongoing policy debates on scaling frameworks [S25][S26][S27][S29].
Unexpected Differences
Cautious stance on advising other ministries versus push for open, interoperable AI across ministries
Speakers: Shri Alok Prem Nagar, Shri Amit Kumar
Alok explicitly states he is not in a position to advise other ministries because they have robust systems of their own [138-144]. Amit, however, calls for open architecture and modular APIs that can be adopted by any ministry, emphasizing interoperability and avoiding vendor lock-in [165-186].
Alok’s reluctance to extend the model beyond Panchayati Raj was not anticipated given Amit’s broader vision for cross‑ministerial AI integration.
POLICY CONTEXT (KNOWLEDGE BASE)
The tension mirrors documented disagreements in AI governance forums where some officials advocate restrained advisory roles while others promote open, cross-ministerial AI ecosystems [S25][S27][S44].
Overall Assessment

The discussion shows strong consensus on the need for language AI, transparency and capacity building, but reveals moderate disagreement on how much automation should be trusted without human review and on the strategy for scaling AI across ministries. These divergences are more about implementation philosophy than about end goals.

Moderate disagreement; while goals are aligned, differing views on automation safeguards and cross‑ministerial architecture could affect the speed and uniformity of AI rollout in rural governance.

Partial Agreements
The speakers share the same goals of linguistic inclusion, capacity development and transparency, but differ in emphasis on implementation details.
Speakers: Shri Alok Prem Nagar, Shri Amit Kumar
Both agree that translating portal content and meeting minutes into local languages empowers citizens and panchayat officials (Alok: translation of eGram Swaraj and Sabha Saar; Amit: need for inclusive, low-cost AI) [16-18][75-84].
Both stress the importance of capacity-building and training to achieve nationwide adoption (Alok: intensify capacity building programme [41-43]; Amit: training programmes started the previous year to overcome resistance [85-89]).
Both highlight that transparent, searchable financial data on the portal enables citizen oversight (Alok: drill-down to finance commission grants, asset geotags [40-41]; Amit: documentation improves accountability and transparency [93-95]).
Takeaways
Key takeaways
Language AI (Bhashini) enables Gram Panchayats to access eGram Swaraj data and meeting minutes in local languages, improving inclusivity and participation.
The Sabha Saar tool automates voice‑to‑text summarisation of Gram Sabha meetings, drastically reducing the time secretaries spend on minute‑taking and enhancing transparency and record‑keeping.
Simple, low‑cost solutions (e.g., mobile‑phone recordings) allow rapid statewide adoption, as demonstrated by Uttar Pradesh onboarding 59,000 Panchayats in 40 days.
Structured documentation through AI changes Panchayat behaviour, fostering accountability and better monitoring of finances, assets and project execution.
Implementation challenges include limited internet connectivity, diverse dialects, the need for training, and ensuring human‑in‑the‑loop safeguards.
Open, API‑based architecture and modular design are essential to avoid vendor lock‑in, ensure interoperability, and support long‑term scalability across ministries.
Future integrations envisioned: AI‑driven service‑request routing, image‑based issue detection, spatial development planning visualisations, and a WhatsApp‑based chatbot (Pancham).
Building sovereign AI infrastructure with domestic LLMs and open standards is critical for population‑scale deployment while maintaining data residency and trust.
Transparent financial dashboards and searchable records empower citizens to monitor plans, expenditures and assets, strengthening public trust.
Resolutions and action items
Expand Bhashini language coverage by adding at least 11 more regional languages (e.g., Assamese, Boro, Maithili, Santali).
Roll out capacity‑building and training programmes for Panchayat officials to use eGram Swaraj, Sabha Saar, and related AI tools.
Integrate solar‑potential data from Swamitva drone surveys with the PM Surya Ghar Yojana portal for coordinated renewable‑energy campaigns.
Pilot AI‑enabled service‑request routing (image analysis) and spatial development plan visualisations in selected states.
Deploy the Pancham WhatsApp chatbot for two‑way communication with Sarpanches and Panchayat secretaries nationwide.
Collaborate with the Department of Drinking Water and Sanitation to extend Bhashini‑based transcription to Village Water Committee meetings.
Establish a governance framework with human‑in‑the‑loop review and complaint mechanisms for AI outputs.
Unresolved issues
Full coverage of all local dialects and languages is still lacking; the timeline and resources for completing language models remain unclear.
Sustained connectivity in remote villages for uploading recordings and accessing AI services needs further solutions.
Standardised protocols for inter‑ministerial data sharing and API integration have not been finalised.
Mechanisms for continuous monitoring of AI accuracy, bias mitigation, and accountability beyond the initial rollout are not fully defined.
The extent of AI autonomy versus human oversight in decision‑making processes remains an open policy question.
Suggested compromises
Adopt a hybrid model where AI generates draft minutes but a human reviewer finalises them, balancing efficiency with accuracy.
Provide low‑cost, mobile‑phone‑based tools while allowing states to customise workflows, meeting states halfway on implementation requirements.
Prioritise open‑standard, modular APIs to enable ministries to adopt AI components without being locked into a single vendor.
Expand language coverage in phases, starting with high‑population languages, while continuing to support existing languages through community contributions.
Thought Provoking Comments
I was there for something like 45 minutes and I was felicitated and sat on stage. I didn’t understand a thing. And then it struck me, how do you expect these people really to relate to what is happening? Because it is public money.
This personal anecdote highlighted the fundamental language barrier in rural governance, turning a bureaucratic success story into a human‑centred problem that needed a solution.
It set the stage for introducing Bhashini as the answer, shifting the conversation from describing existing portals to questioning their accessibility and prompting the moderator to ask about the role of language AI.
Speaker: Shri Alok Prem Nagar
By a click of a button, a panchayat person can see the expenses page in their own language – it was magic.
Shows a concrete, transformative use of AI that directly addresses the language gap identified earlier, illustrating the power of real‑time translation for transparency.
Catalyzed the discussion on how language AI can increase citizen participation, leading the moderator to probe the criticality of language AI for inclusive governance.
Speaker: Shri Alok Prem Nagar
We created Sabha Saar – if you input the video/audio recording of your meeting, you get a minuted draft which you can edit and upload. It solved the biggest pain point for panchayat secretaries.
Identifies a specific workflow bottleneck (meeting minutes) and demonstrates how AI can automate a traditionally labour‑intensive task, highlighting AI’s practical impact on governance efficiency.
Prompted deeper questions about structural changes post‑implementation and led Amit Kumar to discuss cultural shifts and the need for human‑in‑the‑loop oversight.
Speaker: Shri Alok Prem Nagar
We repurposed the dense point‑cloud data from Swamitva drone surveys to calculate rooftop solar potential, integrating it with the PM Surya Ghar Yojana portal.
Illustrates innovative reuse of existing data assets, turning a land‑recording exercise into an energy‑planning tool, thereby expanding AI’s value beyond its original scope.
Opened a new line of discussion about cross‑sectoral AI applications and inspired the moderator to ask about future integrations beyond Gram Sabha meetings.
Speaker: Shri Alok Prem Nagar
If you look at the frugality of the situation – we did not ask Gram Panchayat to invest anything, just a mobile phone. The system has a human‑in‑the‑loop provision for correction.
Emphasises low‑cost, inclusive design and acknowledges the necessity of human oversight, challenging any assumption that AI deployment must be expensive or fully autonomous.
Shifted the tone to address adoption challenges and cultural change, leading to a broader conversation about training, resistance, and the role of AI in rural contexts.
Speaker: Shri Amit Kumar
Uttar Pradesh onboarded 59,000 Gram Panchayats onto e‑Gram Swaraj in just 40 days – an ‘impossible task’ that proved possible when the product meets user needs.
Provides a powerful scalability benchmark, countering skepticism about large‑scale digital roll‑outs in rural India and reinforcing the importance of user‑centric design.
Reinforced the narrative of rapid, large‑scale adoption, encouraging other speakers to discuss lessons for ministries and the importance of open architecture.
Speaker: Shri Alok Prem Nagar
Open architecture and API‑based design are essential for long‑term sustainability and avoiding vendor lock‑in; we must build modular, interoperable AI platforms.
Highlights strategic technical considerations that go beyond immediate functionality, stressing sovereignty, future‑proofing, and the ability to integrate diverse AI use cases.
Steered the discussion toward systemic issues of governance technology, prompting Alok to acknowledge the need for interoperable services like Meri Panchayat and common service centres.
Speaker: Shri Amit Kumar
We need AI to not only translate but also to understand images of citizen‑reported issues (e.g., potholes, overflowing drains) and automatically route them to the responsible department.
Expands the vision of AI from language translation to computer‑vision‑driven service delivery, suggesting a next frontier for AI‑enabled rural governance.
Generated excitement about future integrations, leading the moderator to ask about the next phase of collaboration and prompting Amit to discuss scaling AI across ministries.
Speaker: Shri Alok Prem Nagar
India has already delivered population‑scale digital infrastructure (Aadhaar, UPI, GST). We can now deliver AI at similar scale, ten times cheaper than the West, with strong policy frameworks like DPDP.
Positions India as a global leader in large‑scale AI deployment, linking past successes to current capabilities and reinforcing confidence in scaling AI solutions.
Provided a concluding confidence boost, framing the entire discussion within a narrative of national capability and encouraging optimism about future AI roll‑outs.
Speaker: Shri Amit Kumar
Spatial development plans visualized with AI helped panchayats see future growth scenarios, leading to greater enthusiasm and adoption across Andhra Pradesh.
Demonstrates how AI‑driven visual storytelling can overcome resistance by making abstract plans tangible, highlighting the importance of user engagement and perception.
Added a concrete example of AI influencing planning and community buy‑in, reinforcing earlier points about visualization and leading to the final reflections on participatory governance.
Speaker: Shri Alok Prem Nagar
Overall Assessment

The discussion was driven forward by a series of vivid, experience‑based insights that moved the conversation from identifying language barriers to showcasing concrete AI solutions and their scalability. Alok’s real‑world anecdotes and success stories (language translation, meeting minutes, solar potential, rapid onboarding in UP, spatial planning) repeatedly opened new thematic avenues, while Amit’s reflections on frugality, cultural change, and open architecture added depth and strategic perspective. These pivotal comments reframed the dialogue from a simple description of tools to a broader debate on inclusivity, sustainability, and India’s capacity to lead at population scale, ultimately shaping a narrative of confidence and forward‑looking integration.

Follow-up Questions
How can Bhashini be expanded to cover additional local languages and dialects (e.g., Assamese, Boro, Maithili, Santali) to ensure all Gram Panchayats can use AI tools?
Current language coverage leaves many Panchayats unable to use the tool; expanding language support is essential for inclusive participation.
Speaker: Shri Alok Prem Nagar
What is needed to integrate AI‑powered image analysis into the Meri Panchayat mobile app so that citizen‑reported photos (e.g., potholes, drain overflows) are automatically classified and routed to the appropriate department?
Automated visual issue detection would streamline service delivery and enable timely escalation, but requires research on model accuracy, workflow integration, and offline capability.
Speaker: Shri Alok Prem Nagar
How can AI tools be linked with Common Service Centers (CSCs) across states to provide end‑to‑end service request tracking and real‑time status updates for citizens?
Connecting AI outputs with existing CSC infrastructure would give villagers transparent visibility of service progress, demanding study of integration points and data sharing protocols.
Speaker: Shri Alok Prem Nagar
What capacity‑building and training programs are required for Panchayat officials to effectively adopt Bhashini and other AI‑enabled applications?
Successful adoption hinges on user competence; systematic training curricula and evaluation mechanisms need to be designed and tested.
Speaker: Shri Alok Prem Nagar
How can AI be used to create, visualize, and communicate spatial development plans for villages, and what is the community’s response to such plans?
Spatial planning can guide future growth, but requires research on visualization tools, stakeholder engagement strategies, and impact on planning outcomes.
Speaker: Shri Alok Prem Nagar
Can Bhashini be applied to Village Water Committee (VWC) meetings for translation and summarization, and what adaptations are needed?
Extending AI to other rural governance bodies could improve transparency; needs assessment of domain‑specific terminology and workflow integration.
Speaker: Shri Alok Prem Nagar
What solutions can address connectivity challenges in remote villages to ensure reliable AI service delivery (e.g., offline processing, edge computing)?
Limited internet hampers real‑time AI use; research into low‑bandwidth or edge‑based architectures is critical for widespread adoption.
Speaker: Shri Alok Prem Nagar
What open‑architecture standards and interoperability frameworks should be defined for AI modules across ministries (e.g., MOPR, Rural Development, Agriculture) to avoid vendor lock‑in?
A common, modular architecture will enable reuse, reduce costs, and ensure long‑term sustainability across government platforms.
Speaker: Shri Amit Kumar
How can data sovereignty be ensured while allowing flexibility to shift infrastructure or AI models without disruption?
National security and continuity require mechanisms for data residency, model portability, and contingency planning for geopolitical risks.
Speaker: Shri Amit Kumar
What governance model balances human‑in‑the‑loop oversight with AI automation for Panchayat processes to maintain accountability while gaining efficiency?
Defining the right mix of automation and human review is essential to preserve trust and prevent over‑reliance on AI.
Speaker: Shri Amit Kumar
How can the accuracy of Bhashini’s translation and summarization across diverse languages be continuously monitored and improved?
Quality directly affects user trust; systematic evaluation, feedback loops, and model refinement strategies are needed.
Speaker: Shri Amit Kumar
What measurable impact has structured documentation (Sabha Saar) had on meeting frequency, agenda quality, transparency, and citizen participation in Gram Sabhas?
Understanding the effectiveness of Sabha Saar informs future scaling and highlights areas for improvement.
Speaker: Moderator, Shri Alok Prem Nagar
How does the scalability and cost‑effectiveness of AI solutions in rural governance compare with international benchmarks, and what lessons can be drawn?
Benchmarking against other countries validates investment decisions and guides efficient resource allocation.
Speaker: Shri Amit Kumar
What mechanisms should be established for grievance handling, escalation, and feedback within AI‑enabled service delivery to ensure timely resolution?
Effective escalation pathways are vital for citizen satisfaction and trust in automated systems.
Speaker: Shri Alok Prem Nagar
How can AI integration with meteorological data provide localized forecasts for Panchayats, and what practical applications would this enable?
Localized weather insights can aid agricultural planning and disaster preparedness, requiring research on data integration and user interfaces.
Speaker: Shri Alok Prem Nagar
What approaches can be used to generate AI‑driven audio/video messages (e.g., via the Pancham chatbot) for rapid two‑way communication with Sarpanches and secretaries?
Automated multimedia messaging could improve outreach, but needs study of content generation quality, language support, and delivery channels.
Speaker: Shri Alok Prem Nagar

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

Panel Discussion AI & Cybersecurity _ India AI Impact Summit


Session at a glance: Summary, keypoints, and speakers overview

Summary

The session focused on launching and advancing the United Nations-backed Global Network of Centres for Exchange and Cooperation on AI Capacity Building, aimed at democratizing AI knowledge and keeping humans central to its development [1][7]. Indian officials highlighted domestic efforts to embed AI across education, from university curricula to third-grade school classes, and to retrain the existing workforce [2-4].


Amit Shukla warned that without coordinated action, AI could widen the gap between countries, especially affecting the Global South, and stressed the need for collective capacity-building [13-18]. He cited India’s long-standing ITEC programme, which has trained thousands of officials from 160 countries and offers about 10,000 fully funded in-person courses annually, including AI modules that will be expanded [20-24]. Shukla welcomed the new network, noting participation from 14 countries and the contribution of IIT Madras as the first Indian centre [33-36].


Abdurrahman Habib described Saudi Arabia’s Women Elevate initiative, which delivered a fully online AI training programme to 6,000 women in a year, achieving an 89 % certification rate and reaching over 86 countries [72-82]. He also highlighted Ethiopia’s AI Institute and the broader regional push to develop AI policy, curricula, and research capacity through collaborative networks [196-199]. Seydina Moussa Ndiaye explained that the network’s cooperation framework, first adopted in Dakar, enables centres to offer services, develop a “blueprint” for new centres, and plan multi-country projects [162-170].


Balaraman Ravindran argued that AI capacity building must teach not only technical skills but also how to use AI across all sectors, and that a scientific panel should engage a globally representative expertise base [117-124][134-136]. He projected that within five years the network could raise all participating nations to the highest AI-readiness tier, prompting the UN to revise its categorisation [235-239].


Vilas Dhar emphasized that the network creates institutional innovation, fostering cross-sector collaboration and translating AI governance frameworks into practice [255-263][268-270]. Anne Meldgaard reinforced the network’s role in bridging the digital divide by focusing on upskilling, reskilling, and inclusive community building, arguing that shared purpose and agency are essential for equitable AI adoption [297-304][311-319]. Representatives from Brazil confirmed national support and the enrollment of two Brazilian universities, linking the network to the Global Digital Compact and multilateral AI governance [337-346].


The discussion concluded that the network represents a concrete step toward equitable AI capacity worldwide, with commitments to expand participation, develop shared resources, and embed diversity and purpose in future AI initiatives [249][282-284].


Keypoints


Major discussion points


Inclusive AI education and capacity-building for the Global South, women, and youth – Governments are embedding AI curricula from primary school through higher education and retraining the existing workforce ([1-4]); the need to close the AI capacity divide is highlighted, with India’s ITEC programme offering thousands of fully-funded training slots, including AI courses ([13-24]); Saudi-run “Women Elevate” aims to certify 25,000 women (6,000 already completed) across 86 countries, achieving an 89 % certification rate ([72-84]); African labs (e.g., Ethiopia) stress regional collaboration to avoid being left behind and to share locally-relevant expertise ([191-205]).


Creation and rapid expansion of the Global Network of Centres for Exchange and Cooperation on AI Capacity Building – The network is positioned as a UN-backed initiative that brings together diverse regional expertise ([25-28]); 14 countries have already nominated institutions, with India’s IIT Madras taking the first step ([33-36]); the idea originated from a Saudi-Kenya call at the UN General Assembly and is being operationalised through joint centres in Saudi Arabia, Senegal, Ethiopia, etc. ([48-56]); a cooperation framework and blueprint for new centres are being drafted, with plans for further meetings in Riyadh ([162-169]); the network is seen as a catalyst for institutional innovation and AI governance practice ([255-267]).


Role of scientific and academic panels in providing evidence-based guidance and ensuring Global South representation – The scientific panel’s mandate is to deliver data-driven assessments of AI impacts, which requires broad expertise and capacity worldwide; otherwise its recommendations would be “futile” ([133-138]); panelists stress that capacity-building must enable all stakeholders to use AI, not just to develop it ([117-119]).


Vision for the network’s impact by 2030 and alignment with the UN 2030 Agenda – Participants envision a thriving global dialogue where every country contributes, with the network’s platform enabling exponential growth of training and knowledge sharing ([219-227]); they anticipate a re-classification of AI readiness such that all nations reach the highest tier ([235-239]); the goal is to distribute both compute and human talent so that no one is left behind, directly supporting the Sustainable Development Goals ([214-216]).


Overall purpose / goal of the discussion


The session was convened to review progress, galvanise commitment, and chart the way forward for the newly-launched Global Network of Centres for Exchange and Cooperation on AI Capacity Building. Speakers highlighted existing initiatives, announced new centre memberships, and called for coordinated, inclusive capacity-building that aligns with UN priorities (e.g., the 2030 SDGs) and bridges the AI divide between the Global North and South.


Overall tone and its evolution


The conversation maintained a constructively optimistic and collaborative tone throughout. It began with policy-driven statements emphasizing inclusivity ([1-4]), moved into a collective acknowledgment of challenges and the need for joint action ([13-18]), progressed to enthusiastic sharing of concrete achievements and network expansion ([25-36], [48-56]), and culminated in forward-looking, hopeful visions for 2030 and beyond ([219-239]). Intermittent remarks on diversity, community, and purpose (e.g., by Vilas Dhar and Anne Meldgaard) reinforced a tone of inclusive ambition rather than confrontation, underscoring a shared commitment to equitable AI development.


Speakers

Speakers (from the provided list)


Vilas Dhar – President, Patrick J. McGowan Foundation; member of the UN Secretary-General’s High-Level Advisory Board on AI - Philanthropy, AI policy, international AI governance [S1][S2][S3]


Anne Marie Engtoft Meldgaard – Technical Ambassador, Ministry of Foreign Affairs, Denmark - Digital diplomacy, AI governance, international cooperation [S4][S5][S6]


Balaraman Ravindran – Professor, Indian Institute of Technology Madras; member of the International Independent Scientific Panel on AI - AI research, AI education, capacity-building [S7][S8][S9]


Fitsum Assamnew Andargie – Representative, Ethiopia (AFRD Labs network) - AI capacity building, regional AI collaboration [S10]


Abdurrahman Habib – Representative, UNESCO Centre, Kingdom of Saudi Arabia - AI capacity building, women-empowerment in AI [S11][S12]


Seydina Moussa Ndiaye – Entrepreneur, educator; member of the UN High-Level Advisory Body on AI (Senegal) - AI policy, capacity building, digital inclusion [S13]


Eugenio Garcia – Ambassador for Technology and Innovation, Government of Brazil; Director for Science, Technology, Innovation & Intellectual Property, Brazil - AI governance, multilateral cooperation, capacity building [S14][S15][S16]


S. Krishnan – Secretary, Ministry of Electronics & Information Technology, Government of India - AI policy, AI education, national AI strategy [S17][S18][S19]


Mehdi Snene – Senior Advisor to the UN Secretary-General’s Tech Envoy; facilitator of the panel - AI governance, capacity-building, UN-level coordination [S20][S21]


Moderator – Session moderator (name not specified) - Session facilitation


Amit Shukla – Joint Secretary, Cyber Diplomacy Division, Ministry of External Affairs, Government of India - International AI diplomacy, AI capacity-building initiatives [S25][S26]


Additional speakers (not in the provided list)


Sri A. Revanth Reddy – Honorable Chief Minister of Telangana, India - AI & cybersecurity policy, state-level AI implementation.


Full session reportComprehensive analysis and detailed insights

The opening remarks set the tone for the session, which was convened to launch the United Nations-backed Global Network of Centres for Exchange and Cooperation on AI Capacity Building. The moderator welcomed the audience and highlighted the aim of “democratising access to AI resources while keeping humans at the centre” [7]. S. Krishnan then outlined India’s domestic strategy to make AI education truly inclusive: the higher-education department is integrating AI into every university programme, the school-education department will introduce AI from the third grade [2-3], and existing workers will be retrained to adapt to an AI-enabled economy [4]. Krishnan expressed confidence that the new network will reinforce these national efforts [5-6].


Joint Secretary Amit Shukla positioned AI as a catalyst for welfare and economic growth, warning that “only countries with AI capabilities can reap the full benefits” and that, without coordinated action, the technology could widen the global divide [11-16]. He advocated a collective response to bridge the “AI capacity divide” that especially hampers the Global South [17-18] and cited India’s long-standing ITEC programme, which since 1964 has trained thousands of officials from 160 countries and now offers around 10,000 fully funded in-person courses annually, including AI modules that will be expanded [20-24]. Shukla welcomed the UN-initiated network, noting that 14 countries, including Brazil, China, Ethiopia, Kenya, Saudi Arabia, Senegal and India, have already nominated institutions, with IIT Madras taking the first Indian step [33-36].


After a brief photo-op, the moderator announced that the Chief Minister of Telangana would deliver the upcoming keynote on AI & cybersecurity [??].


UN senior advisor Dr Mehdi Snene introduced the panel, thanking the speakers for setting up the discussion on the network and outlining its genesis: a joint call from the Kingdom of Saudi Arabia and Kenya during the UN General Assembly urged member states to build a “global network on AI capacity building that could truly leave no one behind” [48-56]. He then invited the panelists to elaborate on the network’s development and future direction [44].


Dr Abdurrahman Habib (Saudi Arabia) described the Women Elevate initiative, which aims to certify 25,000 women worldwide in AI. In its first year the programme trained 6,000 women through its fully online course, achieving an 89% completion rate and awarding Microsoft AI-900 certificates; participants spanned 86 countries and included women public servants in Kenya [72-84][85-94]. Habib highlighted the broader regional ambition to empower a young, eager population and to use the network to share programmes and success stories across the Global South [95-96].


Professor Balaraman Ravindran (IIT Madras) expanded the discussion from technical training to a broader definition of AI capacity. He argued that capacity-building should enable everyone to “use AI to do whatever you want better” rather than merely producing more researchers [117-119]. Ravindran warned that the scientific panel must engage a globally representative expertise base; otherwise its evidence-driven recommendations would be futile [133-138]. Looking ahead, he suggested that within five years the UN may have to revise its AI-readiness categories because many nations could reach the top tier [235-239].


Seydina Moussa Ndiaye (Senegal) outlined the co-operation framework adopted at the Dakar workshop, which allows each centre to list services on an “offer sheet” and is working toward a detailed “blueprint” for establishing new centres [162-170]. He announced plans for a third meeting in Riyadh before the July summit, signalling an intention to scale multi-country projects and deepen collaboration [176-177].


Fitsum Assamnew Andargie (Ethiopia/AFRD Labs) stressed the need for continental collaboration. He noted Ethiopia’s substantial investment in an AI Institute that is shaping national policy, curricula and research, and argued that the network enables African labs to “lean on our neighbours” to avoid being left behind [191-199][200-206].


Dr Mehdi Snene then posed the 2030-vision questions. Habib responded that the network could drive exponential growth in training and foster shared dialogue across regions [??]. Assamnew emphasized the need to develop both human skills and compute infrastructure so that “no-one is left behind” [??]. Ravindran reiterated that the UN may need to revise its AI-readiness categories as more countries achieve higher levels of capability [235-239].


Vilas Dhar (Patrick J. McGowan Foundation) highlighted the institutional innovation required to match the rapid pace of AI technology. He argued that the network provides a platform for building the institutions that will guide AI’s future, stressing that governments, not the private sector, must set policies that enable data sharing and regional centres of excellence [255-264]. Dhar linked this to the Global Digital Compact, asserting that the network helps translate high-level AI-governance frameworks into practical, “muscle-memory” collaboration [268-270][271-276].


Anne Marie Engtoft Meldgaard (Denmark) framed the network’s importance in terms of human-centred technology. She introduced four pillars (identity, community, agency and purpose) as essential for meaningful coexistence with AI [303-310]. Meldgaard argued that upskilling and reskilling, especially for women, bridge the digital divide and foster inclusive communities, noting that shared purpose and agency are needed to ensure technology serves humanity rather than the reverse [311-330].


Eugenio Garcia (Brazil) confirmed national support for the network, announcing that two federal universities, the Federal University of Pernambuco and the Federal University of Rio Grande do Sul, have already joined [342-345]. He linked Brazil’s participation to the Global Digital Compact and multilateral AI governance, pledging continued backing for the initiative [337-346].


Across the discussion, participants repeatedly agreed that inclusive AI education, capacity-building for women and youth, and multilateral cooperation are central to the network’s mission. They highlighted concrete programmes (India’s ITEC, Saudi Arabia’s Women Elevate, Ethiopia’s AI Institute) as models to be shared, and they endorsed the cooperation framework, offer sheet and blueprint as tools for scaling [1-4][20-24][72-84][162-170][255-264].


The panel identified several action items: finalising the cooperation framework and offer sheet; completing the blueprint for new centres; organising the Riyadh meeting; expanding AI courses within ITEC; scaling the Women Elevate target to 25,000 women and extending it to women public servants; integrating Ethiopia’s AI Institute into the network; and formalising Brazil’s university participation [162-169][176-177][20-24][72-84][342-345]. Unresolved issues include the lack of detailed funding mechanisms, the need for robust metrics to monitor training outcomes and AI readiness, insufficient representation of Global South experts on the scientific panel, and the persistent compute-infrastructure gap for many countries [133-138][235-239][337-346].


In conclusion, the session portrayed the Global Network of Centres for Exchange and Cooperation on AI Capacity Building as a concrete step toward equitable AI development. By aligning national education reforms, international training programmes, gender-focused initiatives and multilateral governance frameworks, the network aspires to ensure that no country, and no individual, remains behind in the AI era, thereby advancing a shared, human-centred future for artificial intelligence [249][282-284].


Session transcriptComplete transcript of the session
S. Krishnan

Industry bodies, we are working on retraining. Through the higher education department, we are looking at making sure that AI is taught across all courses in all universities and all institutions so that everyone, irrespective of which branch they study, are aware of how AI can make a difference to them. And our school education department has announced as a matter of policy that AI would be taught to school children right from class three, from third grade. So in that sense, we are looking to make AI truly inclusive and train the next generation to adapt to AI and ensure that those who have already joined the work stream are also retrained for this purpose. I’m once again delighted that this event is taking place and it will generate more commitments to further strengthen this global network of institutions.

Thank you very much.

Moderator

Thank you, Mr. Krishnan, for these insightful, delightful remarks, especially around democratizing access to AI resources as well as keeping humans at the center. Now I would like to call upon Sri Amit Shukla, Joint Secretary, the Cyber Diplomacy Division from the Ministry of External Affairs. Can we have a round of applause for Mr. Shukla, please?

Amit Shukla

Sri S. Krishnan, Secretary, Ministry of Electronics and IT, my dear friend, Professor Ravindran, Excellencies, distinguished guests, ladies and gentlemen. Artificial intelligence has emerged today as an enabler for the welfare and progress of humanity. Whenever AI is deployed with purpose, it can catalyze economic growth and social empowerment, government for all. Yet, only countries with AI capabilities can reap actual AI benefits to their fullest potential. We must collectively address this anomaly and ensure that the benefits of AI are equitably shared. Else, this very revolutionary technology could only bring the widest, unfathomable divide among countries. Countries, especially from the Global South, face resource and access constraints. This inhibits their pursuit of harnessing AI for economic and development opportunities.

A collaborative international effort becomes highly relevant to bridge this emerging AI capacity divide. India, with this conviction, has been a strong proponent of international AI capacity building cooperation, especially for the Global South. Our long-standing ITEC program is a testimony to this belief. Under the ITEC program, we have imparted training to thousands of officials from 160 countries since 1964. We have deployed our vast and rich network of institutions and training facilities for this purpose. Annually, around 10,000 fully funded in-person training opportunities are offered across nearly 400 courses at 100 eminent institutes in India. Some of these training courses are AI courses, and we intend to expand this further. In this spirit, we stand with the initiatives of the United Nations and welcome the establishment of the Global Network of Centres for Exchange and Cooperation on Capacity Building.

The network would bring unique expertise and perspectives from different regions of the world. This diversity would only enrich the purpose of the network in its assessment of local AI capacity needs. The network must truly facilitate sharing of expertise and training use cases and developing infrastructure for countries. We have developed our expertise in successful, innovative AI technologies. Our achievements in integrating DPI solutions and adopting AI to leverage technology for social and economic progress could add value to the network. The AI capacity building models under the India AI mission would be relevant for the network. I congratulate all the participating countries on the launch of the framework for the network. As we stand today, we have 14 countries already nominating institutions.

These are Brazil, China, Ethiopia, Guinea, India, Kazakhstan, Kenya, Rwanda, Saudi Arabia, Senegal, Slovakia, South Africa, Trinidad and Tobago, and Vietnam. It is a matter of satisfaction that IIT Madras from India took the initial steps in this endeavor. Let today’s steps of the network build tomorrow’s bigger strides. Thank you.

Moderator

Sorry, could I make a quick announcement to have all the panelists and the speakers on the stage for a quick photo? Mr. Shukla? We will have a quick photo opportunity with all of the featured speakers for this session and we will proceed with the panel right after. Thank you for your patience. Thank you, S. Krishnan and Joint Secretary Shri Amit Shukla. We will now proceed with the panel discussion. Thank you. I would now like to invite Dr. Mehdi Snene, a Senior Advisor to the UN Secretary General’s Tech Envoy, to please introduce the panelists and moderate the panel. Thank you.

Mehdi Snene

He’s coming back. Thank you so much, your excellencies, for setting the discussion regarding the Global Network for Centers of Exchange and Cooperation on AI Capacity Building. It’s truly my honor today to welcome these distinguished panelists to talk about the network, explain how it started, where we are heading, and what the biggest plans we have for the network are. I’m Mehdi from the UN, and I’m happy today to have Seydina Moussa from the center in Senegal, Abdurrahman Habib from the Kingdom of Saudi Arabia, Fitsum from Ethiopia, and Dr. Ravi from IIT Madras. And I’ll start with a kind of chronological order of how we have set up the network, which started with an initial call from the Kingdom of Saudi Arabia and Kenya during the General Assembly, calling for member states to join their effort to build a global network on AI capacity building that could truly leave no one behind, in particular for the needs of building AI national strategies and building local and sovereign national AI capacities.

Dr. Abdurrahman, I’ll start with you, following the chronology of the genesis of the network. I know that there are a lot of initiatives coming from the Kingdom of Saudi Arabia related to AI. You are leading one of the centers that has already been established with UNESCO, and I see that there are a lot of UN agencies also collaborating and cooperating on building this kind of network beyond the actual one. I would like to start by asking: how do you see the cooperation among the different networks, but also among the networks held by different international organizations today?

Abdurrahman Habib

Thank you very much. Thank you for having us. It’s a pleasure to be here and seeing our friends. I’m so excited and happy to see the dream is coming true. A couple of years ago, we started multiple meetings saying we want to work on capacity. We think capacity building is one of the most critical parts, and at the same time, it needs a lot of investment, and we need to come together to build it together. Long story short, we sat with multiple countries. I’m very proud that we and Kenya, especially Philip, Ambassador Philip, he’s not here, but we managed to put together the first meeting in order for us to talk about it.

It was very challenging at the beginning. That wasn’t part of the plan, wasn’t part of an official gathering, but we believe that capacity building needs to be a network. We need to work together, not scattered, and we need to support each other in programs. In Saudi, one of our strengths in the past couple of years is actually capacity building, and that’s why we tried to show what’s been done in Saudi, especially for the Global South, for all of us at the table today, I believe. All of us are very proud of the work that we’re doing. We have a big population, a young population, that is eager to learn. I’ll just give an example.

When we started the Women Elevate program at our UNESCO center, we thought the Women Elevate program would show them how eager those students are. So the goal of Women Elevate is to empower ladies globally in AI by offering a training program for 25,000 ladies as a goal over three years. Only in the past year, we managed to finish 6,000. The number is not important. What’s actually important is to see what the success rate of those students in the program is. This is an online program, fully online. We provide, of course, mentorship, and we provide the support, and it finishes with a certificate from Microsoft AI-900. This is 26 hours of training, about five to six weeks. More than 89% of the students are finishing the courses and getting the certificate.

Now 6,000 of them have done the program and almost the majority of them got the certificate. We’re talking about more than 86 countries this program covered. And we believe such programs and many other programs globally will be able to make a dent and change the future for so many of our citizens, especially women, and I mean it for the Global South. In the Global South we look at technology differently than the northern part. Many of our colleagues and sisters look at IT and STEM overall as the go-to major and the go-to place to learn and equip themselves in technology. Therefore you will see that we have 29,000 ladies registered in the program. Can you imagine that? 29,000 ladies just since June want to continue and learn in this program, and we will hopefully be able to cater to and finish this target and move to new targets as we go.

Not only that, but also we twisted the program a bit by offering the program for public servants. Unfortunately, we are only offering it for public servant ladies. So, for example, in Kenya, Philip managed to train the majority of his team and the working groups, more than 300 ladies now already trained in the foreign affairs in Kenya. And that’s what we want to see in delegation and many other programs, and that’s what we are hoping to achieve in the next couple of years. So I’m very excited and happy to see that dream came true, and we are in a network today where we will share programs, and we will hopefully share even more and more success. We will share this story as we go with our colleagues.

Thank you.

Mehdi Snene

Dr. Abdurrahman, thank you so much. These are impressive numbers you’ve articulated there, and I’m happy to count on your support. As I said, this is a member-state-led initiative, and it’s proposing centers, as our officials from India already expressed. One of the first centers to join, thanks to Professor Ravi, a colleague now from the scientific panel, is IIT Madras. So you took the initiative of initiating that, so you probably really see the outcome and the value of it. Firstly, as a professor at IIT Madras in India, and secondly, as a scientist joining the UN scientific panel, what do you really expect from the network? How do you see the value of that network?

Balaraman Ravindran

Great. So first off, I mean, I’m super thrilled that we are here now, that we have actually gotten this moving. And the thing is, for India, we are a country of multiple parts: there are some parts of the country with a lot of talent, and other parts of the country where we really have to start building our own networks for doing this kind of capacity building. So we know the difficulty and value of making sure that the entire population is skilled, at least AI literate, and has the capacity to contribute meaningfully both to an AI-enriched economy and to AI development going forward. And I believe this is a conversation that cannot be had within the country alone, so we really need to get everybody on board. And as an academic, what I’m looking at, just taking all my other hats off and putting my teacher hat on: I don’t even know how to teach anymore.

So that’s the truth, right? The skilling, the learning, the mechanisms, the facilities that are available, and even the training that the children who are… I teach at a university. I shouldn’t call them children anymore, but anyway. The students are going through when they come to us, right? It’s very different. There’s a lot more self-learning. Students are more comfortable doing things on their own, and in fact, trying to force-fit them into a classroom setting is always challenging, right? But then, what I’m also seeing is that everybody, everybody wants to know AI. Everybody wants to use AI. And I think that’s correct. Not because I want more people to do AI research. I’m not looking for more grad students or research assistants, but it is because every walk of life is going to get influenced by AI.

So when we talk about capacity building in AI… It is not just capacity to do AI better, but capacity to use AI to do whatever you want to do better. And that, I think, is a global imperative. Everybody should know how to use this technology so that as a planet, as humankind, we are able to jointly elevate our worth. And so, as Professor Bengio was saying in the morning, we want everybody at the table. Well, nobody is the dinner. That was a very provocative statement. It leaves a powerful image in your mind. But I think that’s important. And what we are doing here is great for that. And now, putting on my panelist hat. Do I have a couple of more minutes for the panelist hat?

Okay, I’ll take that as a yes. I was not ready for that question. I’m not supposed to answer the questions. Sure. So from the viewpoint of the scientific panel, the whole idea behind the scientific panel is to provide an evidence-driven, science-based approach to the state of AI, the impact of AI, and the potential progress of AI in the coming times. So in that sense, unless we have meaningful engagement with the global majority, with everyone in the globe, it’s going to be futile trying to say that the panel is going to talk for the world at large. And for us to have that conversation, we need to make sure there is a sufficient amount of…

of expertise, sufficient capacity around the globe to engage in that conversation. So I think that is important. I’m pretty sure the panel had a tough time finding enough representation from the global south. Thank you so much. That you can answer. We need to get that. Yeah, true.

Mehdi Snene

Thank you so much, Professor Ravi, and thanks for your kind words. So we started with Saudi Arabia and then India. Saudi Arabia offered the first center, and then Senegal offered the second center and hosted the second meeting of the network, which happened at the end of January in Dakar, and our host is with us today. So, Seydina, you are a former member of the UN Secretary-General’s High-Level Advisory Body on AI, and among its recommendations there was this network of capacity building. I recognize some other HLAB members sitting there in the room, so they will be watching you closely. I’ll give you my microphone, no worries. I’ll give you mine.

So when you made this recommendation, you had the best view on what to expect from the network. We’ll make it quick, because we are running out of time. But please give us more clarification on the initial idea, then the current implementation, and where we are heading.

Seydina Moussa Ndiaye

Thank you, Mehdi. I’m very excited to be here. As you say, the network of AI Capacity Building Centers was one of the recommendations of the HLAB. And the first two were the panel and the global dialogue. And as you say, the idea of the panel was to have evidence on opportunities and challenges of AI to give to policymakers. And the dialogue was to bring all countries together to have this dialogue around AI. But as you know, when we have all countries, there is this gap between countries. There are some countries who understand what’s going on and others who are here but don’t understand all the trends, all the risks, all the challenges, all the opportunities of AI.

So that’s why the network on AI capacity building was also proposed, to give countries the opportunity to have more understanding of AI and to build their own ecosystems. And with the network obviously now being a reality, I think that what we have done since then is to adopt our cooperation framework. We began the work here in India with IIT Madras, and the cooperation framework was then adopted during the workshop in Dakar, where we had, I think, six centers which adopted it. And we talked about what could be the way of doing things within the network. We worked on an offer sheet so that each center who came into the network can offer some services to the network.

And we are still working on stabilizing the offer sheet. And the next step will be to have a blueprint, because it’s important to also help countries which haven’t a center yet to build one. So we will have a blueprint on how to build a center. I think that we asked Audet to do the first draft. And we worked on a couple of activities we can do, and one of the main projects is capacity building. And I think we will work on it with Habib and so on. And we try to have big projects, multi-country projects, so we can work together and help each other. And the next step will be to have perhaps a third meeting.

Habib was talking about having it in Riyadh, I think. So perhaps it will be in Riyadh before the summit in July.

Mehdi Snene

Excellent. This is excellent news. Excellent news. So our centers, get prepared to come to Riyadh. Exciting city. I’ve been there recently. Good. So the Kingdom of Saudi Arabia started the initiative, India the first center. Africa was strongly represented by Dakar as a second host. And then, among the first cohort of centers that joined, is Ethiopia. Dr. Fitsum joined us at IIT Madras and then in Dakar. With that, we’re going to wrap up the session with strong enthusiasm regarding the center. As a center who joined, not building the initiative but joining it, you have for sure seen something within that initiative that attracts you. And I’m sorry we are running out of time, so maybe we have two minutes, but I want to hear from one of the first centers to join the cohort.

Why did you join the network?

Fitsum Assamnew Andargie

Thank you, Mehdi, and I’m very happy to be here and also very happy to be part of the network. So I’ll tell you how we got into this. I’m part of a network of African labs supported by IDRC called AFRD Labs. And we saw that there is a need for collaboration across Africa to develop our AI capacity. And we were introduced to this new program that we thought would actually help us in creating the network. And in fact, joining the network would help us support one another to develop our capacity and lean on our neighbors so as not to be left behind. For example, in Ethiopia, there is a huge investment in AI through the establishment of the AI Institute.

And it was responsible for developing policy, developing strategy, and also supporting capacity building. And from a university, like the university itself, started thinking about AI and started its own policy in the way like… education is delivered. And then an AI course was built, developed, and when we looked at this, we still are left behind. We need to become more competitive. And for that, we need the capacity building. So this network provides us opportunity. Not only that, we can also help others because we understand the context, the local context. The problems we faced when trying to establish our own centers. And our discussions actually helped us understand that, oh, okay, we are in it together, so we can help each other get there.

So that’s why, actually, we were very… We were very enthusiastic. And the government was very enthusiastic about saying, oh, okay, we should join this network. Thank you.

Mehdi Snene

Excellent. Thank you so much. Again, so we heard a lot from the investigators, principal investigators, the designers, the participants, all the enthusiastic centers about the network. Now, in a very short answer, I’ll get back to all of you. How do you see the network in the next five years? Meaning we have at the UN the 2030 SDG goal. In 30 seconds, if you can do it, how do you see the network in 2030 contributing to that? Or where do you see the network in 2030? Dr. Habib, please.

Abdurrahman Habib

Okay. Thanks, Mehdi. In 2030, I think that the dialogue is here. If the network works well, we will have a meaningful dialogue. It’s not only some countries who will lead the discussion, but all countries in the world, I think. I believe that the network will grow exponentially. We’re a small number now, and we already grew exponentially in the past couple of months. This will continue as a trajectory for quite some time. But what’s more important is that the platform is there now. So we can share experiences and share programs in a way that we didn’t have before. And by doing so, I believe that will also contribute to our beneficiaries, whomever they are, and they will receive more and more training and more capacity will be built in that program.

Thank you.

Fitsum Assamnew Andargie

Thank you. In five years, where I see the network is that it could have… distribution of capacity. When I say that it’s not only the compute power, but the human power as well. We will have no one left behind, which means you have people that can do research and generate new knowledge, and people that can use AI and develop their livelihoods. That’s where I see the effect of this is going to be for all these countries involved. Thank you.

Mehdi Snene

Dr. Ravi.

Balaraman Ravindran

The UN has the categorization of countries as to how ready they are with regard to AI. So five years from now, I wish the network would have contributed to such an extent that the UN would have to redo the categorization, so that they have to take the topmost level and start splitting it into four, as opposed to having four levels of AI readiness. So everybody is at the top level as we imagine it now. And then we’ll go on from there.

Mehdi Snene

Thank you so much, Ravi. The floor is yours, Chair MC.

Moderator

Thank you, panelists, for that insightful discussion, and thank you all for joining us. Before we proceed with the closing remarks, I’d like to remind the audience that we will soon have with us Sri A. Revanth Reddy, the Honorable Chief Minister of Telangana, a state that has emerged as one of the leaders in industrial innovation and technology-led governance. He will be presenting a keynote address on AI and cybersecurity, Harnessing AI Power in the State’s Growth. Those who would like to stay back for that session, please be seated. Those who would like to leave after the ODIT session, please use this door. Thank you so much. That was a really insightful panel, which spoke about the needs of the Global South, and capacity building for women and youth particularly.

And now moving on, we would like to invite Mr. Vilas Dhar, President, Patrick J. McGovern Foundation, for his keynote address. May I please request Mr. Dhar to come on stage?

Vilas Dhar

Thank you so much, and good afternoon, everyone. What an exciting conversation that wraps together so much of what we’ve heard over the summit. I want to acknowledge Your Excellencies, our friends here in the room, and I want to take these few minutes to share with you three ideas that directly connect to the network we’re building. The first is that we’re in a time when innovation in technology seems like it’s moving so quickly. But I have to ask: where is the innovation in the institutions that will guide what the future of AI looks like? And I think there is a matter of timing that’s quite interesting. In many ways it feels like no country is as far ahead on this institutional work as they might hope, but neither is any country so far behind that they feel like they’re totally out of the race.

This network gives us an ability to build the institutions that will guide what the AI future looks like. And in that I think is the second opportunity. When we begin to think about what it will look like to build collaboration across countries, across sectors, across topics, I think it’s fair to say that we will not look to the private sector to define that conversation for us. It will require a different model. One that brings governments to play to set policy that allows us to collaborate with the sharing of data. With the idea that compute, even as much as we want to talk about it being sovereign, will have regional centers of excellence and that we need to build ways to collaborate around how we can collaborate.

And I think that’s why we share those resources, why talent flows in this modern world, and why we need the institutions that will let us share our best practices. And third, and maybe most importantly: at a time when AI governance is the topic of the moment and everybody has a new framework, a framework that’s grounded in deep process and practice but still exists only as a framework, we need the institutions that will turn frameworks into practice, that’ll build the muscle memory of collaboration, that’ll actually tell us what it looks like to sit down and negotiate the complexity of ensuring common cyber defense, of sharing data, of building algorithms around agricultural practices that transcend geographies and local weather patterns, that allow us to abstract the underlying knowledge that drives these algorithmic designs and make sure that we can apply them in each place as needed.

That governance is a matter of muscle memory. It’s a matter of practice and it’s a matter of choice. Now, these are three observations that guide why we came to the original idea of creating this network to begin with. I want to acknowledge my colleagues here in the front row from the UN Secretary General’s high-level advisory body, a group that came together with scientific expertise and policy expertise from around the world to set forward a set of recommendations that didn’t just focus on capacity building, but also on the frameworks of global governance at scale. And I want to acknowledge the countries that led on the Global Digital Compact, the first new major multilateral institutional framework for how we might think about issues of interconnectedness in a digital world.

And I want to acknowledge the countries that came forward to really put this initiative together, starting first, of course, with our dear friends from Saudi Arabia and from Kenya, with the incredible work of India here, and Senegal, and the work that will continue. But I want to acknowledge that even when it often feels like this work happens in abstraction, that it happens in international agreements, in national coordination, at the heart of where this work happens is in the digital world. And I want to acknowledge the people that are involved: the scientists, the civil society advocates, the private sector entrepreneurs who are building this at scale. And so let me conclude then on this point: that even as we come to the end of this incredible summit, as we’ve heard from so many, as we’ve heard both proclamations from the stage and, maybe more quietly, behind closed doors, the work that’s happening when people come together to ask a simple question: how can I help?

How can I be involved? That we ensure that we open the doors of transparency, that we allow for participatory mechanisms, that we ensure that we hold not just our values around what technology should look like, but what our society should look like as it enables these technologies. That we continue to enforce a basic adherence to questions like, are we ensuring diversity in participation? Are we ensuring that the next time we hold a conversation like this, we’ll see an equal number of men and women leading centers around the world on AI? That the students who are represented at institutions like this, show a diversity of thought. That we’re ensuring that we’re investing in the rights and norms and values and principles that should guide international collaboration.

And that the centers like the ones that are represented here today will be the vanguard of a global network that sits above and beyond where private sector innovation and frontier models sit, but rather innovates towards the kind of society that we all aspire to. One where these tools are used to enable our common purpose. One where India leads, but so too does Senegal and Kenya. So does Trinidad. So does Chad. So does the United States. And so does Saudi Arabia: a world where we come together to define what our common vision might look like when AI enables our very best. I want to thank you all. I want to thank the incredible center chairs that we had.

And I want to see us all come together in this work. Thank you.

Moderator

Thank you, Mr. Dhar, for those powerful reflections, particularly emphasizing the need for diversity in participation. I’m now very excited to announce our final speaker, Her Excellency Ms. Anne Marie Engtoft Meldgaard, the Tech Ambassador from Denmark. I would also like to make a short announcement that there will be a short intervention after this by the Ambassador for Technology and Innovation from the Government of Brazil, Mr. Eugenio Garcia. But please, it is my honor to welcome Ms. Meldgaard.

Anne Marie Engtoft Meldgaard

Good afternoon, everyone. Vilas, you’re such a hard act to follow; I love this idea of muscle memory. Let me congratulate the four gentlemen that were on the stage before. I am so impressed, and I’m almost a little scared at the scale of the progress that you have already created with this network in such a short amount of time. In my home region, in the European Union and in Europe, it would take a little longer for us before we find the format and the framework and then make it into law; maybe in a few years we would actually be able to do what you’ve been doing in such a short amount of time. Congratulations. The global digital divide: we have been speaking at length about it this last week. It is still a huge challenge to the global dissemination, to a true democratization of this technology, to meaningful access around AI. When some 34 countries are the only ones that have the world’s compute, it becomes really, really challenging.

But what I think this network is doing, it is shining a light that goes beyond these traditional divides that we see in the infrastructure. It is around upskilling and reskilling. And I actually believe that we have more in common between the global north and the global south. And I think we can learn a lot more from each other when it comes to upskilling and reskilling. And that’s why this network is such an important and I think landmark piece for the AI puzzle to be solved. I want to end, I want to make this short, and I want to end on why I think this is important. A dear friend of mine, he has a framework for talking about meaningful coexistence with technology.

And it requires four things, four ingredients. First of all, identity: how to remain human in a technological world. It seems like a stupid, obvious question, right? But I think many of us are feeling the sense that I’m losing a little bit of me being a human being. My identity as an individual, as a Dane, as a woman, whatever your identity might be: in a world where technology is taking over, how to make that persistent, that we have that sense of identity. That is part of being skilled to take the right decisions. The second one is around community.

In a time of increasing technology, we need more community, not less. This gathering could have been a Zoom meeting, but nevertheless, thousands of us travelled from all over the world, spending time in here with too much air conditioning and out in too much traffic. Why? Because of the human connection. Because of the impromptu meetings, the inspiring speeches, but also the people you meet when you’re in the coffee line. Those are inspiring, and that community is being built; that’s why these AI summits work. That’s why the communities that we’re part of cannot be put together solely in a digital world. They need to be present. Then there’s agency.

In a more agentic world, we need more agency, not less. I think many of the people that you meet, maybe your families, your communities, the citizens that you represent if you’re a lawmaker or policymaker, feel that their agency, of actually having a say in how this is unfolding, is minimized. And this is another place where reskilling and skilling come up: having the right tools to be part of that. And then finally, purpose. How often do we ask ourselves, what is the purpose of this technology? There’s a sense that I would love for the AI to empty the dishwasher while I write poetry and play with my kids. But right now, we’re on a trajectory where I am emptying the dishwasher while the AI is playing with my kids and writing poetry.

If we do not insist on asking the questions around what the purpose of that technology is, and if we do not skill our citizens, ourselves, in being able to ask, what do we collectively want? What do we want out of this technology? We’re going to get a technology that we serve, rather than the other way around. And so congratulations on this incredible network. I hope to be a stronger partner of it, but right now you are shining a light on a necessary piece for a more meaningful coexistence with technology. Thank you.

Moderator

Thank you, Excellency, and it’s always a pleasure to see a woman in the room speaking on this subject. I’d now like to request Ambassador Garcia to quickly make his intervention. Thank you.

Eugenio Garcia

Thank you. I’ll be very brief, since I was not in the program, but just to say that Brazil fully supports this global network of the United Nations on AI and capacity building. I think it’s very well known, and it was remembered yesterday: President Lula from Brazil mentioned specifically in his statement that the role of the United Nations is key for an international governance of AI, and that we need to come to the defense of the multilateral system. It is important that we do this together, so we’ll be working on it. We have two institutions, two universities from Brazil, already joining this network: one from the northeast of Brazil, the Federal University of Pernambuco, and one from the south of Brazil, the Federal University of Rio Grande do Sul.

So these two institutions are already collaborating with the network. Of course, maybe in the future others could also join, but just to say that this network will complement very well the AI track of the Global Digital Compact, both the scientific panel and the global dialogue. And I think if we can strengthen multilateralism, that’s the way to go, and you can count on our support. Thank you so much.

Moderator

Thank you, Ambassador. In the interest of time, I’d just like to thank the speakers, the panelists, and the audience. I hope you enjoyed this insightful session, and we look forward to more news on this network. Thank you, everyone. We now move on to the next session. Thank you, speakers. May I remind the audience that we now have with us Sri A. Revanth Reddy, the Honorable Chief Minister of Telangana, for a keynote address on AI and cybersecurity, Harnessing AI Power in the State’s Growth. We would encourage the audience to please stay back for the session. Those who choose to leave may please do so through the door on my left.

Thank you very much.

Related Resources: Knowledge base sources related to the discussion topics (13)
Factual Notes: Claims verified against the Diplo knowledge base (4)
Additional Context (high)

“India’s higher‑education department is integrating AI into every university programme, and the school‑education department will introduce AI from the third‑grade level.”

The knowledge base notes that India is working to integrate AI across school and higher-education systems and to expand AI projects throughout the education sector, though it does not specify the exact grade level or that every university programme will include AI [S84] and [S85] and [S86].

Confirmed (high)

“India’s ITEC programme, running since 1964, has trained thousands of officials from 160 countries and now offers around 10 000 fully‑funded in‑person courses annually, including AI modules that will be expanded.”

The knowledge base confirms that the ITEC programme has trained thousands of officials from 160 countries since 1964 and provides about 10 000 training opportunities each year [S1].

Confirmed (medium)

“The Chief Minister of Telangana would deliver the upcoming keynote on AI & cybersecurity.”

A source records that the Chief Minister of Telangana gave a keynote on artificial intelligence, confirming the AI focus of the address, though it does not mention cybersecurity specifically [S92].

Additional Context (medium)

“AI is a catalyst for welfare and economic growth, and without coordinated action the technology could widen the global divide, especially for the Global South.”

Other UN-related documents describe AI as a tool for development that can help reduce disparities if deployed equitably, providing background for the claim about growth potential and risk of widening gaps [S65] and [S90].

External Sources (93)
S1
Panel Discussion AI &amp; Cybersecurity _ India AI Impact Summit — -Moderator- Session moderator (role/title not specified) -Vilas Dhar- President, Patrick J. McGovern Foundation
S2
https://dig.watch/event/india-ai-impact-summit-2026/building-public-interest-ai-catalytic-funding-for-equitable-compute-access — We have Vilas Dhar , president of the Patrick J. McGovern Foundation. Vilas serves on the UN Secretary General’s High -L…
S3
A Digital Future for All (afternoon sessions) — – Vilas Dhar – President and Trustee, Patrick J. McGovern Foundation Vilas Dhar: I mean, we assume that inertia is the…
S4
Leaders TalkX: Partnership pivot: rethinking cooperation in the digital era — Anne Marie Engtoft Meldgaard, Technical Ambassador from Denmark’s Ministry of Foreign Affairs, advocated for meaningful …
S5
Global challenges for the governance of the digital world — This engaging panel featured a diverse range of voices, including Anne-Marie Engelth-Melgaard as the Danish TAC Ambassad…
S6
AI Meets Cybersecurity Trust Governance &amp; Global Security — -Anne Marie Engtoft- Technology Ambassador, Ministry of Foreign Affairs of Denmark
S7
Why science metters in global AI governance — -Balaraman Ravindran- Professor at IIT Madras, member of International Independent Scientific Panel
S8
Towards a Safer South Launching the Global South AI Safety Research Network — – Dr. Balaraman Ravindran- Dr. Urvashi Aneja
S9
Panel Discussion AI &amp; Cybersecurity _ India AI Impact Summit — – Balaraman Ravindran- Abdurrahman Habib – Balaraman Ravindran- S. Krishnan
S10
Panel Discussion AI &amp; Cybersecurity _ India AI Impact Summit — -Fitsum Assamnew Andargie- Representative from Ethiopia, part of AFRD Labs network
S11
Panel Discussion AI &amp; Cybersecurity _ India AI Impact Summit — – Balaraman Ravindran- Abdurrahman Habib – Fitsum Assamnew Andargie- Abdurrahman Habib
S12
https://dig.watch/event/india-ai-impact-summit-2026/panel-discussion-ai-cybersecurity-_-india-ai-impact-summit — He’s coming back. Thank you so much for your excellencies setting the discussion regarding the Global Network for Center…
S13
Panel #1 : « La gouvernance du numérique au service de l’inclusion : enjeux, freins, et opportunités » — -Seydina Ndiaye: Entrepreneur et enseignant, membre du comité consultatif de haut niveau sur l’IA des Nations Unies, ges…
S14
Open Forum #48 Implementation of the Global Digital Compact — – **Eugenio Garcia**: Director for Science, Technology, Innovation and Intellectual Property, Government of Brazil, form…
S15
Open Forum #71 Advancing Rights-Respecting AI Governance and Digital Inclusion through G7 and G20 — – **Eugenio Garcia** – Director of Science, Technology, Innovation and Intellectual Property at the Ministry of Foreign …
S16
Eugenio Vargas Garcia — Eugenio Vargas Garcia has 30 years of professional experience in foreign policy and diplomacy. He holds a PhD in History…
S17
AI-Powered Chips and Skills Shaping Indias Next-Gen Workforce — -S. Krishnan- Role/Title: Secretary of METI (Ministry of Electronics and Information Technology)
S18
https://dig.watch/event/india-ai-impact-summit-2026/panel-discussion-ai-cybersecurity-_-india-ai-impact-summit — Sri S. Krishnan, Secretary, Ministry of Electronics and IT, my dear friend, Professor Ravindran, Excellencies, distingui…
S19
Panel Discussion AI &amp; Cybersecurity _ India AI Impact Summit — Sorry, could I make a quick announcement to have all the panelists and the speakers on the stage for a quick photo? Mr. …
S20
Panel Discussion AI &amp; Cybersecurity _ India AI Impact Summit — – Amit Shukla- Abdurrahman Habib- Seydina Moussa Ndiaye- Mehdi Snene- Eugenio Garcia – Amit Shukla- Fitsum Assamnew And…
S21
S22
Keynote-Olivier Blum — -Moderator: Role/Title: Conference Moderator; Area of Expertise: Not mentioned -Mr. Schneider: Role/Title: Not mentione…
S23
Keynote-Vinod Khosla — -Moderator: Role/Title: Moderator of the event; Area of Expertise: Not mentioned -Mr. Jeet Adani: Role/Title: Not menti…
S24
Day 0 Event #250 Building Trust and Combatting Fraud in the Internet Ecosystem — – **Frode Sørensen** – Role/Title: Online moderator, colleague of Johannes Vallesverd, Area of Expertise: Online session…
S25
Panel Discussion AI &amp; Cybersecurity _ India AI Impact Summit — – Amit Shukla- Abdurrahman Habib- Seydina Moussa Ndiaye- Mehdi Snene- Eugenio Garcia – Amit Shukla- Fitsum Assamnew And…
S26
https://dig.watch/event/india-ai-impact-summit-2026/panel-discussion-ai-cybersecurity-_-india-ai-impact-summit — Sorry, could I make a quick announcement to have all the panelists and the speakers on the stage for a quick photo? Mr. …
S27
Keynote Adresses at India AI Impact Summit 2026 — -S. Krishnan- Secretary (India)
S28
WS #462 Bridging the Compute Divide a Global Alliance for AI — ### Balancing Efficiency and Equity ### Immediate Opportunities ### Infrastructure and Investment Challenges ### Skil…
S29
Open Forum #33 Building an International AI Cooperation Ecosystem — Qi Xiaoxia: Thank you, Professor, distinguished guests, ladies and gentlemen, friends, good afternoon. I’m delighted to …
S30
Keynote-Ankur Vora — “Technologists can choose whether we use AI to take on the world’s greatest challenges or just the most precious.”[1]. “…
S31
Meet&amp;Greet for those funding Internet development | IGF 2023 Networking Session #111 — In conclusion, the White Project, led by Professor Jim Ryan, is a reputable research consortium that conducts research a…
S32
AUDA-NEPAD White Paper: Regulation and Responsible Adoption of AI in Africa Towards Achievement of AU Agenda 2063 — A well-structured redistribution of unpaid childcare and housework could help strengthen women’s participation in the di…
S33
Agenda item 6: other matters — Ethiopia: Thank you, Mr. Chairperson, for giving me the floor. Allow me to share some of the efforts made by governmen…
S34
New AI innovation hub aims to position Ethiopia as regional leader — Ethiopiahas launcheda new Artificial Intelligence University Innovation Pod in Addis Ababa, marking a significant step i…
S35
Comprehensive Report: China’s AI Plus Economy Initiative – A Strategic Discussion on Artificial Intelligence Development and Implementation — Yeah, I think I just want to add some echo to Professor Gong’s comments. I think it’s not necessarily a negative effect,…
S36
Education, Inclusion, Literacy: Musts for Positive AI Future | IGF 2023 Launch / Award Event #27 — In conclusion, while technology has played a valuable role in education, it is important to address the challenges posed…
S37
Governments urged to build learning systems for the AI era — Governments are facing increasedpressureto govern AI effectively, prompting calls for continuous institutional learning….
S38
WS #100 Integrating the Global South in Global AI Governance — Fadi Salim: of our panel. My name is Fadi Salim. I’m the Director of the Policy Research Department at the Mohammad …
S40
How AI Is Transforming Indias Workforce for Global Competitivene — “go beyond top tier institutions to tier two.”[144]. “that’s how it become more inclusive and I think this has to be a h…
S41
AI is here. Are countries ready, or not? | IGF 2023 Open Forum #131 — Alain Ndayishimiye:Yes thank you moderator once again let me take the opportunity to greet everyone whatever you are in …
S42
Global AI Policy Framework: International Cooperation and Historical Perspectives — The scientific panel will provide evidence-based policy assessments, whilst the global dialogue will enable multilateral…
S43
WS #288 An AI Policy Research Roadmap for Evidence-Based AI Policy — Alex Moltzau: Yes, thank you so much. My name is Alex Maltzau. And I work as a second national expert in the European AI…
S44
International Cooperation for AI &amp; Digital Governance | IGF 2023 Networking Session #109 — It is crucial to strike a balance and ensure that there are no overlaps or conflicts between these regulations. Collabor…
S45
Digital Governance 3.0 — This framework will be further discussed at the Summit of the Future, where the GDC will be officially adopted. In summa…
S46
Panel Discussion AI &amp; Cybersecurity _ India AI Impact Summit — A collaborative international effort becomes highly relevant to bridge this emerging AI capacity divide. India, with thi…
S47
AI is here. Are countries ready, or not? | IGF 2023 Open Forum #131 — Alain Ndayishimiye:Yes thank you moderator once again let me take the opportunity to greet everyone whatever you are in …
S48
WS #288 An AI Policy Research Roadmap for Evidence-Based AI Policy — Development | Legal and regulatory Evidence-Based Policymaking and Research Integration Part of the roadmap emphasizes…
S49
Policy Network on Artificial Intelligence | IGF 2023 — Audience:Hi, Ansgar Kuna from EY. In AI, as with a number of these digital technologies that are arising, we’re seeing a…
S50
Global AI Policy Framework: International Cooperation and Historical Perspectives — The speakers demonstrate significant consensus on key principles including the need for inclusive governance, building o…
S51
Using AI to tackle our planet’s most urgent problems — High consensus level due to the presentation format with one primary speaker and supportive moderator. The implications …
S52
Leveraging the UN system to advance global AI Governance efforts — Equally, there’s an emphasis placed on the benefits of collaboration and teamwork. The analysis proposes that cooperativ…
S53
Democratizing AI Building Trustworthy Systems for Everyone — “of course see there would be a number of challenges but i think as i mentioned that one doesn’t need to really control …
S54
Leveraging AI to Support Gender Inclusivity | IGF 2023 WS #235 — Another important point emphasized in the analysis is the significance of involving users and technical experts in the p…
S55
GPAI: A Multistakeholder Initiative on Trustworthy AI | IGF 2023 Open Forum #111 — Abhishek Singh:I can take that, no worries. Thank you, Abhishek. The floor is yours. You can give your question. Yeah, t…
S56
UNESCO Recommendation on the ethics of artificial intelligence — 87. Member States should ensure that the potential for digital technologies and artificial intelligence to  contribute t…
S57
Panel Discussion Summary: AI Governance Implementation and Capacity Building in Government — The tone was pragmatic and solution-oriented throughout, with speakers acknowledging both challenges and opportunities i…
S58
Scaling AI for Billions_ Building Digital Public Infrastructure — High level of consensus with strong implications for coordinated action. The agreement across diverse stakeholders sugge…
S59
AI as critical infrastructure for continuity in public services — “I believe that there is perhaps awareness challenge as well as the capacity challenge, because I think that this whole …
S60
AI, Data Governance, and Innovation for Development — The overall tone was optimistic and solution-oriented, with speakers focusing on practical ways to overcome obstacles th…
S61
Smart Regulation Rightsizing Governance for the AI Revolution — Her approach to capacity building went beyond traditional training programs to emphasize shared evidence, performance be…
S62
Agenda item 5 : Day 4 Afternoon session — Gender perspectives and inclusivity in capacity building efforts were recognized as crucial elements by the delegates. T…
S63
Inclusive AI For A Better World, Through Cross-Cultural And Multi-Generational Dialogue — Key to this trajectory are collaborative and inclusive policy governance, culturally attuned ethical frameworks, and bro…
S64
Responsible AI for Shared Prosperity — Social and economic development Social and economic development | Artificial intelligence
S65
High Level Dialogue with the Secretary-General — He mentions the potential of artificial intelligence as a tool for development if used equitably.
S66
Open Forum #67 Open-source AI as a Catalyst for Africa’s Digital Economy — Development | Sociocultural Fair, enabling, risk-conscious, and equitable regulation requires collaborative partnership…
S67
A Digital Future for All (afternoon sessions) — There is a need to build AI capacity in developing countries to ensure they can participate in and benefit from AI advan…
S68
How AI Is Transforming Indias Workforce for Global Competitivene — “go beyond top tier institutions to tier two.”[144]. “that’s how it become more inclusive and I think this has to be a h…
S69
ITU launches global AI Skills Coalition to bridge expertise gap in developing nations — The International Telecommunication Union (ITU) haslaunchedthe AI Skills Coalition, a global initiative backed by 27 org…
S70
Empowering India &amp; the Global South Through AI Literacy — Artificial intelligence | Capacity development | Social and economic development
S71
Welfare for All Ensuring Equitable AI in the Worlds Democracies — “This year, we upscaled 5 .6 million Indians, and so we actually doubled that commitment to 20 million people by the end…
S72
Panel Discussion AI &amp; Cybersecurity _ India AI Impact Summit — The network has achieved significant early progress, with 14 countries nominating institutions: Brazil, China, Ethiopia,…
S73
Artificial intelligence (AI) – UN Security Council — During the9821st meetingof the Artificial Intelligence Security Council, a key discussion centered around whether existi…
S74
International Cooperation for AI &amp; Digital Governance | IGF 2023 Networking Session #109 — Liming Zhu:All right, thanks very much for having me. Right, so I’m a professor from the University of New South Wales, …
S75
Open Forum #33 Building an International AI Cooperation Ecosystem — Qi Xiaoxia: Thank you, Professor, distinguished guests, ladies and gentlemen, friends, good afternoon. I’m delighted to …
S77
Global AI Policy Framework: International Cooperation and Historical Perspectives — The scientific panel will provide evidence-based policy assessments, while the global dialogue will enable multilateral …
S78
First round of informal consultations with member states, observers and stakeholders (2024) — This vision aligns with the principles of the UN Charter and supports the Sustainable Development Goals of the 2030 Agen…
S79
How to ensure cultural and linguistic diversity in the digital and AI worlds? — His perspectives and insights contribute to a global dialogue that intersects with multiple sustainable development goal…
S80
(Interactive Dialogue 2) Summit of the Future – General Assembly, 79th session — Julius Maada Bio emphasized the need for a more representative, equitable, and transparent UN Security Council, highligh…
S81
WS #204 Closing Digital Divides by Universal Access Acceptance — Roonjha Qaisar: Thank you so much. I really appreciate being here on this screen. It’s wonderful to see a lot of excitin…
S82
Opening address of the co-chairs of the AI Governance Dialogue — Majed Sultan Al Mesmar: Bismillah ar-Rahman ar-Rahim. Excellencies, distinguished guests, colleagues, friends, As-salamu…
S83
Ethical AI_ Keeping Humanity in the Loop While Innovating — Hello, good afternoon. So we’ll try to have this session very dynamic because it’s after lunch, it’s Friday, over five d…
S84
AI 2.0 The Future of Learning in India — Integration of school and higher education systems is essential, with universities reaching out to schools for better co…
S85
India outlines plan to widen AI access — India’s government has set out plans todemocratiseAI infrastructure nationwide. The strategy focuses on expanding access…
S86
TIMELINE — This strategy will integrate artificial intelligence technologies into the field of education through projects aimed at …
S88
https://dig.watch/event/india-ai-impact-summit-2026/building-indias-digital-and-industrial-future-with-ai — Today’s mobile networks are becoming intelligent, programmable and trusted layers of the national infrastructure. and th…
S89
Dedicated stakeholder session (in accordance with agreed modalities for the participation of stakeholders of 22 April 2022)/OEWG 2025 — The Chair’s closing remarks emphasized the positive progress made in CBMs and capacity building while stressing the need…
S90
Is AI a catalyst for development? — The Economist argues that AI has the potential to revolutionise developing countries by transforming their economies and…
S91
DRAFT AUGUST, 2024 — AI must benefit humanity. This requires a prudent balance of policies to tap AI’s potential while reducing its risks. To…
S92
Keynote Address_Revanth Reddy_Chief Minister Telangana — Artificial intelligence | Financial mechanisms
S93
Keynote_ 2030 – The Rise of an AI Storytelling Civilization _ India AI Impact Summit — Speaker 2 formally welcomes the next presenter, thanks the current speaker for his remarks, and introduces Mr. Naveen Ti…
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
S. Krishnan
1 argument, 164 words per minute, 144 words, 52 seconds
Argument 1
Nationwide AI curriculum and workforce retraining
EXPLANATION
The speaker outlines a comprehensive plan to embed AI education across all levels of the education system and to retrain the existing workforce. This aims to ensure that every student and employee gains basic AI knowledge and skills.
EVIDENCE
He states that industry bodies are working on retraining, that AI will be taught across all university courses through the higher education department, and that school children will start learning AI from third grade, emphasizing inclusive AI education for the next generation and current workers [1-4].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The panel discussion notes industry bodies working on retraining and the higher education department ensuring AI is taught across all university courses, confirming a nationwide curriculum plan [S1]. Krishnan’s keynote further outlines this strategy [S27], and related remarks on upskilling the next-gen workforce appear in the AI-powered chips and skills briefing [S17].
MAJOR DISCUSSION POINT
National AI education strategy
AGREED WITH
Amit Shukla, Vilas Dhar
DISAGREED WITH
Amit Shukla, Abdurrahman Habib, Fitsum Assamnew Andargie
Amit Shukla
3 arguments, 140 words per minute, 440 words, 188 seconds
Argument 1
ITEC program delivering AI training to officials from 160 countries
EXPLANATION
The speaker highlights India’s long‑standing ITEC programme as a vehicle for capacity building, noting that it has trained thousands of officials from many countries and now includes AI courses. This demonstrates India’s commitment to sharing expertise globally.
EVIDENCE
He mentions that the ITEC programme has imparted training to thousands of officials from 160 countries since 1964, offering around 10,000 fully funded in-person training opportunities annually across 400 courses at 100 institutes, some of which are AI courses, with plans to expand further [20-24].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The India AI Impact Summit panel highlights the ITEC programme’s history of training thousands of officials from 160 countries since 1964, with around 10,000 fully funded opportunities annually, confirming the claim [S1].
MAJOR DISCUSSION POINT
International AI training outreach
DISAGREED WITH
S. Krishnan, Abdurrahman Habib, Fitsum Assamnew Andargie
Argument 2
Creation of Global Network to bridge AI capacity divide
EXPLANATION
The speaker calls for a collaborative international effort to address the disparity between countries that have AI capabilities and those that do not. Establishing a global network is presented as a solution to ensure equitable AI benefits.
EVIDENCE
He notes that only countries with AI capabilities can fully reap AI benefits, warning that without collective action the technology could widen global divides, and proposes a collaborative international effort to bridge the emerging AI capacity divide [13-19].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The Global Alliance for AI discussion on bridging the compute divide emphasizes the need for a coordinated network to address AI capacity gaps worldwide [S28]. An open forum on building an international AI cooperation ecosystem also underscores the importance of such a global network [S29].
MAJOR DISCUSSION POINT
Bridging AI capacity gaps
AGREED WITH
Balaraman Ravindran, Seydina Moussa Ndiaye, Vilas Dhar, Mehdi Snene
DISAGREED WITH
Balaraman Ravindran
Argument 3
AI as an enabler for welfare; need equitable benefit sharing
EXPLANATION
The speaker frames AI as a catalyst for economic growth and social empowerment, but stresses that its benefits must be shared fairly among nations. Without equitable distribution, AI could exacerbate existing inequalities.
EVIDENCE
He describes AI as an enabler for welfare and progress, capable of catalyzing economic growth when deployed with purpose, and warns that only AI-capable countries will reap full benefits unless the advantages are shared equitably, otherwise the technology could deepen divides [11-16].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Krishnan’s remarks at the summit describe AI as an enabler for welfare and stress that benefits must be shared equitably to avoid widening inequalities [S12].
MAJOR DISCUSSION POINT
Equitable AI benefits
AGREED WITH
S. Krishnan, Vilas Dhar
Abdurrahman Habib
3 arguments, 152 words per minute, 817 words, 321 seconds
Argument 1
Women Elevate program delivering AI certification to thousands of women
EXPLANATION
The speaker presents the Women Elevate initiative, which aims to empower women globally by providing AI training and certification. The program has already reached thousands of participants across many countries.
EVIDENCE
He explains that the program set a goal of training 25,000 women over three years, achieved 6,000 completions in the past year with an 89 % completion rate, offering a 26-hour online course that awards a Microsoft AI-900 certificate, and has reached participants in over 86 countries, with 29,000 women registered since June [72-84][85-90].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The Women Elevate initiative’s scale (targeting 25,000 women, with 6,000 completions and 29,000 registrations since June) is detailed in the summit panel summary [S1] and reiterated in the broader discussion of capacity building [S12].
MAJOR DISCUSSION POINT
Women’s AI capacity building
AGREED WITH
Vilas Dhar, Moderator, Anne Marie Engtoft Meldgaard
DISAGREED WITH
S. Krishnan, Amit Shukla, Fitsum Assamnew Andargie
Argument 2
Sharing of programs and expertise through the network
EXPLANATION
The speaker emphasizes that capacity building should be collaborative, with countries sharing programs, expertise, and best practices through the network. This collective approach is portrayed as essential for scaling impact.
EVIDENCE
He notes that capacity building requires joint investment and cooperation, stating that “we need to work together, not scattered, and we need to support each other in programs,” and later adds that the network will enable sharing of programs and success stories [68-71][94-95].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Capacity-building participants emphasize collaborative sharing of programs and best practices through the network, as highlighted in the summit’s discussion on joint investment and cooperation [S12].
MAJOR DISCUSSION POINT
Collaborative program sharing
Argument 3
Targeted AI training for women and public‑servant females
EXPLANATION
The speaker details how the Women Elevate program specifically includes public‑servant women, illustrating a focused effort to upskill female government employees. This targeted approach seeks to strengthen gender representation in the public sector.
EVIDENCE
He cites the program’s goal of training 25,000 women, reports that 6,000 women have completed it, and highlights that in Kenya, more than 300 female foreign-affairs staff have been trained through the initiative, demonstrating a focus on public-servant females [71-74][90-93].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The Women Elevate programme specifically includes public-servant women, such as the 300+ female foreign-affairs staff trained in Kenya, confirming the targeted approach [S1].
MAJOR DISCUSSION POINT
Gender‑focused public‑sector training
Fitsum Assamnew Andargie
2 arguments, 101 words per minute, 365 words, 214 seconds
Argument 1
Ethiopia’s AI Institute and national AI policy driving local capacity
EXPLANATION
The speaker describes Ethiopia’s investment in an AI Institute that formulates policy, strategy, and capacity‑building activities, illustrating a national commitment to AI development. This institutional foundation is presented as a model for other countries.
EVIDENCE
He mentions a “huge investment in AI by the establishment of the AI Institute,” which is responsible for developing policy, strategy, and supporting capacity building, and notes that a university has created an AI course, underscoring the country’s holistic approach [196-199].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Ethiopia’s new AI Innovation Hub and the Ethiopian Artificial Intelligence Institute are cited as national efforts to develop policy, strategy, and capacity-building activities [S34]; the country’s agenda item also references these initiatives [S33].
MAJOR DISCUSSION POINT
National AI institutional framework
DISAGREED WITH
S. Krishnan, Amit Shukla, Abdurrahman Habib
Argument 2
Distribution of human and compute capacity across member states
EXPLANATION
The speaker envisions a future where both computational resources and skilled people are evenly distributed among participating countries, ensuring that no nation is left behind in AI research or application. This reflects a holistic view of capacity building.
EVIDENCE
He states that in five years the network will enable distribution of capacity, not only compute power but also human power, allowing people to conduct research, generate knowledge, and develop livelihoods across all member states [230-232].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The Global Alliance discussion on bridging the compute divide calls for equitable distribution of both compute power and skilled personnel across nations [S28], aligning with Andargie’s vision.
MAJOR DISCUSSION POINT
Equitable AI resource distribution
AGREED WITH
S. Krishnan, Balaraman Ravindran, Abdurrahman Habib
Balaraman Ravindran
4 arguments, 145 words per minute, 725 words, 297 seconds
Argument 1
Emphasis on AI literacy for all citizens, not just researchers
EXPLANATION
The speaker argues that AI literacy should be universal, enabling every individual to use AI to improve their work and daily life, rather than focusing solely on producing AI researchers. This broadens the concept of capacity building beyond academia.
EVIDENCE
He observes that everyone wants to know and use AI, emphasizing that capacity building is about using AI to do tasks better, not just about creating more researchers, and stresses the need for AI literacy across the population [112-119].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
IGF 2023 remarks stress that AI literacy must be universal, extending beyond academia to all citizens, supporting Ravindran’s point [S36]. Further, governments are urged to build learning systems for the AI era, reinforcing the need for broad AI education [S37].
MAJOR DISCUSSION POINT
Universal AI literacy
AGREED WITH
S. Krishnan, Abdurrahman Habib, Fitsum Assamnew Andargie
Argument 2
Need for global representation and evidence‑driven scientific panel
EXPLANATION
The speaker stresses that the scientific panel must engage experts from the global majority to be credible, calling for broader representation and evidence‑based analysis. Without such inclusion, the panel’s recommendations would lack legitimacy.
EVIDENCE
He notes that unless there is meaningful engagement with the global majority, the panel’s work will be futile, and points out the difficulty in finding sufficient representation from the global south, highlighting the need for broader participation [133-138].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The open forum on building an international AI cooperation ecosystem highlights the necessity of inclusive, evidence-based scientific panels with representation from the global majority [S29].
MAJOR DISCUSSION POINT
Inclusive scientific governance
AGREED WITH
Mehdi Snene, Seydina Moussa Ndiaye
Argument 3
AI literacy as a universal right, ensuring no one left behind
EXPLANATION
The speaker frames AI literacy as a fundamental right, arguing that future AI readiness assessments should reflect universal competence. He envisions a world where all countries achieve the highest AI readiness level.
EVIDENCE
He proposes that in five years the UN would need to redo its AI readiness categorisation because everyone would be at the top level, indicating universal AI literacy as a baseline expectation [235-239]; he also reiterates that everyone should know how to use AI to improve outcomes [118-120].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Calls for universal AI literacy and inclusive education appear in IGF discussions on education, inclusion, and literacy [S36], as well as in policy recommendations urging governments to ensure no one is excluded from AI benefits [S37].
MAJOR DISCUSSION POINT
AI readiness for all
Argument 4
Redefinition of AI readiness categories by 2029
EXPLANATION
Building on the previous point, the speaker predicts that by 2029 the UN will have to restructure its AI readiness framework: with every country occupying the current top tier, that tier itself would need to be subdivided into finer categories. This signals a transformative shift in global AI capacity.
EVIDENCE
He explicitly states that the UN would have to redo the categorisation, splitting the topmost level into four instead of having four levels, implying that all countries would reach the highest tier [235-239].
MAJOR DISCUSSION POINT
Future AI readiness taxonomy
Seydina Moussa Ndiaye
2 arguments, 95 words per minute, 407 words, 255 seconds
Argument 1
Adoption of cooperation framework and blueprint for new centers
EXPLANATION
The speaker outlines that the network has agreed on a cooperation framework and is developing a blueprint to guide the establishment of new AI capacity‑building centers. This formal structure is intended to streamline expansion.
EVIDENCE
He explains that the cooperation framework was adopted during the Dakar workshop, that an offer sheet is being stabilised, and that a blueprint for building new centers is being drafted, with Audet preparing the first draft [162-169].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The Dakar workshop’s adoption of a cooperation framework and the drafting of a blueprint for new AI capacity-building centers are documented in the summit’s proceedings [S12].
MAJOR DISCUSSION POINT
Framework for scaling centers
AGREED WITH
Mehdi Snene, Balaraman Ravindran
Argument 2
Planned meetings and blueprint implementation to scale the network
EXPLANATION
The speaker mentions upcoming meetings, including a potential third gathering in Riyadh, as part of the roadmap to operationalise the blueprint and expand the network. This signals continued momentum.
EVIDENCE
He notes that after the Dakar workshop, the next step may be a third meeting in Riyadh before the July summit, indicating concrete planning for further scaling [175-177].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The same summit notes a prospective third meeting in Riyadh to operationalise the blueprint before the July summit, confirming the scaling roadmap [S12].
MAJOR DISCUSSION POINT
Future network scaling events
Vilas Dhar
3 arguments, 203 words per minute, 999 words, 295 seconds
Argument 1
Network as platform for building collaborative institutions
EXPLANATION
The speaker positions the network as a mechanism to create and strengthen institutions that will guide AI development and governance across nations. He argues that institutional innovation is essential for coordinated AI progress.
EVIDENCE
He states that the network gives the ability to build institutions that will guide the AI future, and that collaboration across countries, sectors, and topics will require governments to set policies enabling data sharing and regional centers of excellence [255-259][260-264].
MAJOR DISCUSSION POINT
Institutional innovation through network
AGREED WITH
Amit Shukla, Balaraman Ravindran, Seydina Moussa Ndiaye, Mehdi Snene
Argument 2
Call for gender‑balanced leadership in AI centers
EXPLANATION
The speaker urges that AI capacity‑building centers should have equal representation of men and women in leadership roles, highlighting gender parity as a metric of inclusive governance.
EVIDENCE
He explicitly asks whether the next conversation will see an equal number of men and women leading AI centers worldwide [278-280].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The AUDA-NEPAD white paper on AI regulation stresses the importance of gender parity in digital initiatives, supporting the call for equal representation of men and women in AI centre leadership [S32].
MAJOR DISCUSSION POINT
Gender parity in AI leadership
AGREED WITH
Abdurrahman Habib, Moderator, Anne Marie Engtoft Meldgaard
Argument 3
Institutional innovation and policy frameworks to guide AI future
EXPLANATION
Reiterating his earlier point, the speaker stresses that beyond rapid technological change, there is a need for policy frameworks and institutional mechanisms that can steer AI development responsibly. This underscores the role of governance structures.
EVIDENCE
He asks where institutional innovation is occurring, notes that no country is far ahead or far behind, and argues that the network enables building institutions that will guide AI’s future, requiring policy that allows data sharing and regional excellence [255-259][260-264].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Keynote remarks underline that policymakers must create inclusive rules and institutional frameworks to steer AI responsibly, aligning with Dhar’s emphasis on institutional innovation [S30].
MAJOR DISCUSSION POINT
Policy frameworks for AI governance
AGREED WITH
Amit Shukla, S. Krishnan
Eugenio Garcia
2 arguments, 120 words per minute, 207 words, 102 seconds
Argument 1
Brazil’s commitment and university participation in the network
EXPLANATION
The speaker affirms Brazil’s full support for the UN AI capacity‑building network and cites the participation of two Brazilian federal universities, demonstrating concrete national involvement.
EVIDENCE
He notes Brazil’s support, references President Lula’s statement on multilateral AI governance, and lists the Federal University of Pernambuco and the Federal University of Rio Grande do Sul as network participants [337-345].
MAJOR DISCUSSION POINT
National commitment and institutional involvement
Argument 2
Emphasis on multilateral governance and alignment with the Global Digital Compact
EXPLANATION
The speaker links the AI network to broader multilateral initiatives, emphasizing that it should complement the Global Digital Compact and reinforce multilateral governance structures.
EVIDENCE
He references President Lula’s statement about the role of nations in AI governance, stresses the need to defend the multilateral system, and says the network will complement the AI track of the Global Digital Compact [339-347].
MAJOR DISCUSSION POINT
Multilateral AI governance alignment
Mehdi Snene
1 argument, 145 words per minute, 843 words, 348 seconds
Argument 1
Member‑state led, evidence‑driven approach to capacity building
EXPLANATION
The moderator (Mehdi Snene) emphasizes that the AI capacity‑building network is driven by member states and relies on evidence‑based methods, underscoring its legitimacy and collaborative nature.
EVIDENCE
He thanks the participants, notes that the initiative is member-state-led and evidence-driven, and references Professor Ravi’s involvement from IIT Madras as an example of scientific participation [97-100].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The moderator’s opening remarks describe the AI capacity-building network as member-state-led and evidence-based, confirming the claim [S12].
MAJOR DISCUSSION POINT
Evidence‑based, member‑state ownership
AGREED WITH
Balaraman Ravindran, Seydina Moussa Ndiaye
Anne Marie Engtoft Meldgaard
2 arguments, 195 words per minute, 828 words, 254 seconds
Argument 1
Four pillars for meaningful tech coexistence: identity, community, agency, purpose
EXPLANATION
The speaker proposes a framework consisting of identity, community, agency, and purpose to ensure technology serves humanity responsibly. These pillars guide inclusive and purposeful AI deployment.
EVIDENCE
She outlines the four ingredients: identity (maintaining humanity), community (human connections), agency (empowering individuals to influence technology), and purpose (defining why technology is used), providing concrete reflections on each [304-330].
MAJOR DISCUSSION POINT
Framework for responsible tech coexistence
AGREED WITH
Abdurrahman Habib, Vilas Dhar, Moderator
Argument 2
Framework for responsible AI coexistence emphasizing identity, community, agency, purpose
EXPLANATION
Reiterating her earlier points, the speaker stresses that the network should embed these four pillars into its operations, ensuring AI development aligns with human values and societal goals.
EVIDENCE
She restates the same four pillars (identity, community, agency, purpose) and connects them to the need for inclusive, purposeful AI that serves collective aspirations [302-330].
MAJOR DISCUSSION POINT
Ethical AI governance framework
Moderator
2 arguments, 130 words per minute, 579 words, 266 seconds
Argument 1
AI resources must be democratized and kept human‑centred
EXPLANATION
The moderator highlights that equitable access to AI tools and keeping people at the centre of AI development are essential for inclusive benefits. This underscores the need for policies that prevent concentration of AI power and ensure that technology serves societal needs.
EVIDENCE
After thanking Mr. Krishnan, the moderator explicitly praises his remarks on “democratizing access to AI resources as well as keeping humans at the center,” signalling these as key priorities for the discussion [7].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The keynote stresses democratizing access to AI tools and keeping humans at the centre of AI development, echoing the moderator’s priority [S30]; the moderator also highlighted this in the summit’s closing remarks [S12].
MAJOR DISCUSSION POINT
Democratizing AI access and human‑centred AI
Argument 2
Gender diversity and broader inclusion are critical for AI capacity‑building centres
EXPLANATION
The moderator stresses that future AI capacity‑building initiatives must ensure equal representation of men and women in leadership roles, reflecting a commitment to gender balance and inclusive participation across the network.
EVIDENCE
During the closing remarks, the moderator thanks Mr. Govan (Vilas Dhar) for his reflections and specifically highlights “the need for diversity in participation,” emphasizing gender balance among AI centre leaders [291-298].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The AUDA-NEPAD white paper highlights gender diversity as essential for equitable AI development, supporting the moderator’s call for balanced leadership [S32].
MAJOR DISCUSSION POINT
Gender‑balanced leadership in AI capacity‑building
AGREED WITH
Abdurrahman Habib, Vilas Dhar, Anne Marie Engtoft Meldgaard
Agreements
Agreement Points
Broad AI education and capacity building for all citizens, including workforce, students, and women
Speakers: S. Krishnan, Balaraman Ravindran, Abdurrahman Habib, Fitsum Assamnew Andargie
Nationwide AI curriculum and workforce retraining
Emphasis on AI literacy for all citizens, not just researchers
Women Elevate program delivering AI certification to thousands of women
Distribution of human and compute capacity across member states
All speakers stress the need for inclusive AI education and capacity building that reaches every segment of society – from school children and university students to current workers and women – ensuring that AI literacy becomes universal and that both human and computational resources are evenly distributed. [1-4][112-119][72-84][85-90][230-232]
POLICY CONTEXT (KNOWLEDGE BASE)
The emphasis on inclusive AI education aligns with India’s long-standing ITEC programme and its advocacy for international AI capacity-building cooperation, especially for the global south [S46], and reflects the evidence-based capacity-building initiatives highlighted in recent AI policy roadmaps [S48].
Establishment and operationalisation of a global network of AI capacity‑building centres
Speakers: Amit Shukla, Balaraman Ravindran, Seydina Moussa Ndiaye, Vilas Dhar, Mehdi Snene
Creation of Global Network to bridge AI capacity divide
Need for global representation and evidence‑driven scientific panel
Adoption of cooperation framework and blueprint for new centers
Network as platform for building collaborative institutions
Member‑state led, evidence‑driven approach to capacity building
The participants agree on creating a coordinated global network of AI capacity-building centres, backed by a cooperation framework and blueprint, with evidence-driven scientific panels and institutional support, driven by member-states. [13-19][26-28][133-138][162-169][255-259][260-264][97-100]
POLICY CONTEXT (KNOWLEDGE BASE)
Calls for a coordinated global network echo the consensus in the Global AI Policy Framework to build on existing institutions and promote inclusive governance across regions [S50], and are reinforced by India’s push for an international AI capacity-building infrastructure [S46].
Gender inclusion and women’s empowerment in AI capacity building and leadership
Speakers: Abdurrahman Habib, Vilas Dhar, Moderator, Anne Marie Engtoft Meldgaard
Women Elevate program delivering AI certification to thousands of women
Call for gender‑balanced leadership in AI centers
Gender diversity and broader inclusion are critical for AI capacity‑building centres
Four pillars for meaningful tech coexistence: identity, community, agency, purpose
Speakers converge on the importance of gender balance, highlighting programmes that train women, calling for equal representation of men and women in centre leadership, and emphasizing inclusive values such as identity and community as essential for responsible AI development. [72-84][85-90][278-280][291-298][298-299][304-330]
POLICY CONTEXT (KNOWLEDGE BASE)
This priority is supported by the UNESCO Recommendation on the Ethics of AI which urges maximising AI’s contribution to gender equality [S56], by GPAI’s multistakeholder focus on gender issues [S55], and by IGF workshops that stress gender-responsive capacity building and the inclusion of women in AI policy processes [S54][S62].
AI as a catalyst for socio‑economic development and the need for equitable benefit sharing
Speakers: Amit Shukla, S. Krishnan, Vilas Dhar
AI as an enabler for welfare; need equitable benefit sharing
Nationwide AI curriculum and workforce retraining
Institutional innovation and policy frameworks to guide AI future
All agree that AI should be harnessed to promote welfare and economic growth, but its benefits must be shared fairly, requiring supportive policies and institutional innovation. [13-16][1-4][255-259][260-264]
POLICY CONTEXT (KNOWLEDGE BASE)
The view of AI as a development catalyst is reflected in the UN Secretary-General’s remarks on equitable AI for development [S65] and in broader policy discussions linking AI to shared prosperity and sustainable development goals [S64][S50].
Reliance on evidence‑driven, monitored approaches for AI capacity building
Speakers: Mehdi Snene, Balaraman Ravindran, Seydina Moussa Ndiaye
Member‑state led, evidence‑driven approach to capacity building
Need for global representation and evidence‑driven scientific panel
Adoption of cooperation framework and blueprint for new centers
The speakers underline that capacity-building initiatives must be grounded in data, evidence, and systematic monitoring, with clear frameworks and blueprints to guide implementation. [97-100][133-138][162-169]
POLICY CONTEXT (KNOWLEDGE BASE)
Evidence-based policymaking is a core recommendation of the AI Policy Research Roadmap, which calls for monitored, performance-benchmark approaches to capacity building [S48], and is further elaborated in proposals for rightsizing AI governance through shared evidence and cross-border procurement networks [S61].
Similar Viewpoints
Both stress large‑scale training programmes – Shukla through the ITEC initiative for officials worldwide and Krishnan through a national curriculum for students and workers – as essential to build AI capacity. [20-24][1-4]
Speakers: Amit Shukla, S. Krishnan
ITEC program delivering AI training to officials from 160 countries
Nationwide AI curriculum and workforce retraining
Both highlight that capacity building must be collaborative, with shared programmes, expertise and an even spread of both human talent and compute resources among member states. [68-71][94-95][230-232]
Speakers: Abdurrahman Habib, Fitsum Assamnew Andargie
Sharing of programs and expertise through the network
Distribution of human and compute capacity across member states
Both call explicitly for gender parity in the leadership of AI capacity‑building centres, stressing diversity as a core requirement. [278-280][291-298]
Speakers: Vilas Dhar, Moderator
Call for gender‑balanced leadership in AI centers
Gender diversity and broader inclusion are critical for AI capacity‑building centres
Both stress that AI capacity‑building must be embedded in multilateral governance structures and supported by innovative institutional policies. [339-347][255-259][260-264]
Speakers: Eugenio Garcia, Vilas Dhar
Emphasis on multilateral governance and alignment with the Global Digital Compact
Institutional innovation and policy frameworks to guide AI future
Unexpected Consensus
Recognition that the global north and south share common challenges and can learn from each other in AI capacity building
Speakers: Anne Marie Engtoft Meldgaard, Fitsum Assamnew Andargie
Four pillars for meaningful tech coexistence: identity, community, agency, purpose
Distribution of human and compute capacity across member states
Anne, representing a European perspective, states that the north and south have more in common than differences regarding upskilling, while Andargie emphasizes equitable distribution of capacity across all members, indicating a shared view that collaboration transcends geographic divides. [298-299][230-232]
POLICY CONTEXT (KNOWLEDGE BASE)
India’s advocacy for north-south cooperation in AI capacity building [S46] mirrors the Global AI Policy Framework’s call for inclusive governance that transcends traditional geopolitical divisions [S50], and aligns with cross-cultural dialogue initiatives promoting shared learning [S63].
Agreement on democratizing AI resources and keeping humans at the centre, expressed by both a UN moderator and an Indian official
Speakers: Moderator, Amit Shukla
AI resources must be democratized and kept human‑centred
AI as an enabler for welfare; need equitable benefit sharing
The moderator explicitly praises democratisation of AI resources, while Shukla warns that without equitable sharing AI could widen divides, together underscoring a shared commitment to human-centred, accessible AI. [7][13-16]
POLICY CONTEXT (KNOWLEDGE BASE)
UN-led discussions have repeatedly stressed the need to democratise AI and maintain a human-centred approach, a stance echoed by Indian officials advocating for shared foundational resources [S52][S53][S50].
Overall Assessment

There is strong consensus among the participants that AI capacity building must be inclusive, evidence‑driven, and coordinated through a global network of centres. Key shared priorities include universal AI literacy, gender inclusion, equitable benefit sharing, and embedding the network within multilateral governance frameworks.

High consensus – the convergence of viewpoints across diverse regions and roles suggests a solid foundation for coordinated policy action, with implications that the network is likely to receive broad political support and can move towards concrete implementation of frameworks, blueprints, and inclusive programmes.

Differences
Different Viewpoints
Different preferred mechanisms for AI capacity building
Speakers: S. Krishnan, Amit Shukla, Abdurrahman Habib, Fitsum Assamnew Andargie
Nationwide AI curriculum and workforce retraining; ITEC program delivering AI training to officials from 160 countries; Women Elevate program delivering AI certification to thousands of women; Ethiopia’s AI Institute and national AI policy driving local capacity
Krishnan proposes embedding AI education across all school, university and industry retraining programmes within India [1-4]. Shukla stresses using the long-standing ITEC programme to train officials from many countries, including AI courses, as the main outreach tool [20-24]. Habib focuses on a gender-targeted online certification programme (Women Elevate) that reaches thousands of women across 86 countries [72-84]. Andargie describes a national institutional approach centred on Ethiopia’s AI Institute and university AI courses as the basis for capacity building [191-199]. All agree on the need to build capacity, but they diverge on whether the priority should be national curricula, international official training, gender-focused online certification, or national AI institutes.
POLICY CONTEXT (KNOWLEDGE BASE)
Policy debates highlight divergent mechanisms, from traditional training programmes to evidence-driven networks and cross-border procurement policies, illustrating the lack of consensus on the optimal capacity-building model [S48][S61].
Role of the private sector versus government‑led institutions in shaping AI governance
Speakers: Vilas Dhar, Amit Shukla
Network as platform for building collaborative institutions (government‑driven) and avoiding reliance on private sector; ITEC program delivering AI training and leveraging existing Indian expertise (includes private‑sector partnerships)
Vilas Dhar argues that the network should build institutions that guide AI’s future and explicitly states that “we will not look to the private sector to define that conversation for us” [255-259][260-264]. Shukla, while emphasizing international cooperation, highlights India’s AI mission and ITEC programme which involve partnerships with industry and mentions “Our achievement on integration of DPI solutions and adoptions into AI” [29-31], implying a role for private-sector technology and solutions. The tension lies in the extent to which private actors should be involved versus a primarily government-driven model.
POLICY CONTEXT (KNOWLEDGE BASE)
The blurring of lines between regulatory development and standards work underscores tensions between private-sector influence and government leadership, while recent IGF sessions stress the importance of balanced public-private partnerships for scaling AI [S49][S58][S66].
Optimism about universal AI readiness versus recognition of persistent capacity gaps
Speakers: Balaraman Ravindran, Amit Shukla
AI literacy as a universal right, ensuring no one left behind (future UN re‑categorisation); creation of Global Network to bridge AI capacity divide
Ravindran envisions that within five years the UN will have to redo its AI-readiness categorisation because all countries will reach the top level, effectively eliminating the divide [235-239]. Shukla, however, stresses that currently only countries with AI capabilities reap benefits and that a collaborative network is needed to prevent a widening divide [13-19]. This reflects a disagreement between a highly optimistic projection of universal readiness and a more cautious assessment of existing disparities.
POLICY CONTEXT (KNOWLEDGE BASE)
Forum participants expressed optimism about AI readiness but simultaneously acknowledged infrastructure, skills, and awareness gaps that continue to limit universal adoption [S57][S59][S60].
Unexpected Differences
Extent to which gender‑focused programmes satisfy broader gender‑parity goals in AI leadership
Speakers: Abdurrahman Habib, Vilas Dhar, Moderator
Women Elevate program delivering AI certification to thousands of women; call for gender‑balanced leadership in AI centres; gender diversity and broader inclusion as critical for AI capacity‑building centres
Habib highlights a successful women-focused training programme but does not address leadership representation, while Dhar and the Moderator explicitly call for equal numbers of men and women leading AI centres [278-280][291-298]. The unexpected tension is that a large women-training initiative is not automatically seen as fulfilling the demand for gender parity in governance structures.
POLICY CONTEXT (KNOWLEDGE BASE)
IGF workshops and UNESCO recommendations call for gender-responsive capacity building and question whether targeted programmes alone can achieve full gender parity in AI leadership, highlighting the need for broader systemic measures [S54][S55][S62].
Overall Assessment

The panel shows strong consensus on the need for a global AI capacity‑building network and inclusive AI education, but diverges on the primary delivery model (national curricula vs international official training vs gender‑focused online programmes vs national AI institutes), the role of the private sector, and the realistic timeline for achieving universal AI readiness.

Moderate disagreement: while all participants share the same end‑goal, the differing strategic preferences could affect coordination, resource allocation, and policy design, potentially slowing the network’s implementation unless a harmonised approach is negotiated.

Partial Agreements
All speakers affirm the importance of a global AI capacity‑building network, the need for inclusive AI education, and the sharing of expertise across countries. They differ on the specific mechanisms, institutional arrangements, and thematic emphases, but converge on the overarching goal of equitable AI development and cooperation [7][13-19][68-71][162-169][112-119][97-100][255-259][302-330][337-345].
Speakers: S. Krishnan, Amit Shukla, Abdurrahman Habib, Fitsum Assamnew Andargie, Balaraman Ravindran, Seydina Moussa Ndiaye, Vilas Dhar, Anne Marie Engtoft Meldgaard, Eugenio Garcia
AI as an enabler for welfare and need for equitable benefit sharing; creation of Global Network to bridge AI capacity divide; sharing of programs and expertise through the network; adoption of cooperation framework and blueprint for new centers; emphasis on AI literacy for all citizens; member‑state led, evidence‑driven approach to capacity building; network as platform for building collaborative institutions; four pillars for meaningful tech coexistence (identity, community, agency, purpose); Brazil’s commitment and university participation in the network
Takeaways
Key takeaways
– AI capacity building is being pursued at all education levels, from primary school curricula to university retraining and workforce upskilling.

– The Global Network of Centres for Exchange and Cooperation on AI Capacity Building is established to bridge the AI capacity divide, especially for the Global South.

– Existing programmes such as India’s ITEC, Saudi Arabia’s Women Elevate, and Ethiopia’s AI Institute are being leveraged as models within the network.

– Inclusion and diversity are central: targeted training for women, public‑servant females, and a call for gender‑balanced leadership of AI centres.

– Multilateral governance and alignment with the Global Digital Compact are emphasized as essential for equitable AI development.

– A cooperation framework and a draft blueprint for creating new centres have been adopted, with plans for further meetings (e.g., Riyadh) and multi‑country projects.

– Long‑term vision includes exponential growth of the network, redistribution of both human and compute capacity, and a re‑classification of AI readiness levels by 2030.
Resolutions and action items
– Adoption of a cooperation framework for the network during the Dakar workshop.

– Development of an “offer sheet” and a detailed blueprint to guide the establishment of new AI capacity‑building centres.

– Planning of a third network meeting in Riyadh before the July summit.

– India to expand AI courses within the ITEC programme and continue fully funded training for officials from partner countries.

– Saudi Arabia to continue scaling the Women Elevate programme toward its 25,000‑woman target and extend it to public‑servant females.

– Ethiopia to integrate its AI Institute activities with the network for regional collaboration.

– Brazil to join the network through two federal universities (Pernambuco and Rio Grande do Sul) and commit further institutional participation.

– Panelists called for gender‑balanced representation in centre leadership and in the UN scientific panel.

– Commitment to share programmes, use‑cases, and develop multi‑country AI projects across the network.
Unresolved issues
– Specific funding mechanisms and long‑term financial sustainability of the network were not detailed.

– Concrete metrics and monitoring frameworks to assess the impact of capacity‑building activities remain undefined.

– Adequate representation of Global South experts on the UN scientific panel was noted as insufficient.

– How to address the compute‑infrastructure gap, especially for countries lacking high‑performance resources, was raised but not resolved.

– Operational details for translating AI governance frameworks into day‑to‑day practice were not clarified.
Suggested compromises
– Emphasising collaborative, multilateral approaches rather than relying solely on private‑sector leadership (Vilas Dhar).

– Combining online training with mentorship and certification to broaden reach while managing resource constraints (Women Elevate programme).

– Utilising existing ITEC training infrastructure to deliver AI courses, thereby sharing resources across countries (Amit Shukla).

– Balancing national sovereignty concerns with shared regional compute centres and data‑sharing protocols (Anne Marie Engtoft Meldgaard’s four pillars).
Thought Provoking Comments
We are looking at making AI truly inclusive and train the next generation to adapt to AI, teaching it from class three in schools and across all university courses.
Highlights a concrete, systemic approach to democratizing AI education from early schooling, framing AI literacy as a universal right rather than a niche skill.
Set the agenda for the discussion around inclusivity, prompting subsequent speakers to address capacity gaps in the Global South and to propose concrete training programs.
Speaker: S. Krishnan
Only countries with AI capabilities can reap the full benefits; we must collectively address this anomaly to ensure AI benefits are equitably shared, otherwise the technology could widen the global divide.
Frames AI capacity as a geopolitical equity issue, moving the conversation from technical training to international cooperation and responsibility.
Shifted the tone toward a global‑south perspective, leading participants like Abdurrahman Habib and Balaraman Ravindran to discuss specific capacity‑building initiatives and the need for collaborative networks.
Speaker: Amit Shukla
Our Women Elevate program aims to empower 25,000 women globally in AI; in one year we trained 6,000 women with an 89% completion rate, covering 86 countries, and we are also training public‑servant women in Kenya.
Introduces gender‑focused capacity building with measurable outcomes, emphasizing the importance of inclusive participation beyond generic training numbers.
Deepened the discussion on diversity, prompting the moderator and later speakers to highlight the need for gender balance and inspiring the UN advisor’s later remarks on inclusive participation.
Speaker: Abdurrahman Habib
Capacity building is not just about improving AI research; it’s about giving everyone the ability to use AI to do what they want better. Nobody should be the dinner.
Broadens the definition of AI capacity to everyday utility and uses a striking metaphor (“nobody should be the dinner”) that challenges the audience to think beyond elite research circles.
Reoriented the conversation toward practical AI literacy for all sectors, influencing subsequent speakers to discuss institutional roles and the need for widespread AI education.
Speaker: Balaraman Ravindran
Innovation in institutions is lagging behind rapid tech advances; we need new institutional models that let governments set policy, share data, and create regional centers of excellence rather than leaving the private sector to define the AI future.
Identifies a critical gap between technological innovation and institutional governance, calling for a systemic shift in how AI is regulated and coordinated globally.
Created a turning point that moved the dialogue from capacity‑building programs to the broader challenge of AI governance, setting up the stage for discussions on frameworks and muscle memory.
Speaker: Vilas Dhar
Meaningful coexistence with technology requires four ingredients: identity, community, agency, and purpose. We must ensure people retain their human identity, build real communities, have agency over technology, and ask purposeful questions about its role.
Synthesizes the ethical and societal dimensions of AI into a memorable framework, linking technical capacity to human values and social cohesion.
Provided a conceptual anchor that resonated with the audience, prompting reflections on diversity, community building, and purpose‑driven AI, and reinforcing the earlier calls for inclusive, purpose‑oriented governance.
Speaker: Anne Marie Engtoft Meldgaard
Brazil fully supports the UN global AI capacity‑building network; two Brazilian universities are already joining, complementing the AI track of the Global Digital Compact and strengthening multilateralism.
Demonstrates concrete national commitment and ties the network to broader multilateral initiatives, reinforcing the theme of collective action.
Reinforced the momentum of the network’s expansion, showing that more countries are joining, and underscored the link between capacity building and global digital governance.
Speaker: Eugenio Garcia
Overall Assessment

The discussion was driven forward by a series of pivotal remarks that moved the conversation from a broad statement of inclusivity to concrete challenges and solutions. Krishnan’s education pledge set the inclusive tone, Shukla highlighted the geopolitical urgency, and Habib’s gender‑focused program added depth to the equity narrative. Ravindran’s reframing of capacity building broadened the scope to everyday users, while Dhar’s critique of institutional lag shifted focus to governance structures. Meldgaard’s four‑ingredient framework anchored the ethical dimension, and Garcia’s national endorsement illustrated growing multilateral commitment. Together, these comments created a dynamic flow that progressed from problem identification to actionable pathways, emphasizing both technical capacity and the societal frameworks needed for AI to serve the global common good.

Follow-up Questions
How do you see the cooperation among the different networks and international organizations today?
Understanding coordination mechanisms is crucial for effective AI capacity building across regions.
Speaker: Abdurrahman Habib
What do you really expect from the network, and how do you see its value?
Clarifies the objectives and benefits of the network for the scientific panel and broader AI community.
Speaker: Balaraman Ravindran
How do you see the network in 2030 (or the next five years) and its contribution to the UN 2030 SDG goals?
Provides a long‑term vision for scaling impact, inclusivity and capacity building worldwide.
Speaker: Abdurrahman Habib, Fitsum Assamnew Andargie, Balaraman Ravindran
Develop a blueprint for establishing new AI capacity‑building centres in countries that do not yet have one.
A standardized guide will help more nations create effective centres and join the network.
Speaker: Seydina Moussa Ndiaye
Stabilise and finalise the ‘offer sheet’ that lists services each centre can provide to the network.
A clear catalogue of services is essential for coordinated collaboration and resource sharing.
Speaker: Seydina Moussa Ndiaye
Re‑assess the global AI‑readiness categorisation and possibly redefine its levels as capacities improve.
Accurate readiness metrics will inform policy, funding and training priorities worldwide.
Speaker: Balaraman Ravindran
Evaluate the success rates and outcomes of the Women Elevate programme, especially in the Global South.
Measuring impact will guide improvements and demonstrate gender‑focused capacity building effectiveness.
Speaker: Abdurrahman Habib
Investigate the AI capacity divide between the Global North and South and identify mechanisms to bridge it.
Addressing this divide is key to ensuring equitable AI benefits and preventing widening disparities.
Speaker: Amit Shukla
Design institutional models for cross‑country and cross‑sector AI collaboration that go beyond private‑sector‑led initiatives.
New governance structures are needed to enable shared data, compute resources and policy coordination.
Speaker: Vilas Dhar
Translate AI governance frameworks into practice, building the ‘muscle memory’ for collaboration and implementation.
Operationalising policies will ensure that AI governance is effective, not just theoretical.
Speaker: Vilas Dhar
Study the role of identity, community, agency and purpose in AI adoption and governance to ensure inclusive, human‑centred AI.
These four ingredients are critical for meaningful coexistence with technology and for equitable outcomes.
Speaker: Anne Marie Engtoft Meldgaard
Assess diversity and gender balance in AI capacity‑building leadership and participation across the network.
Ensuring equal representation promotes fairness and broadens the pool of ideas and solutions.
Speaker: Vilas Dhar, Anne Marie Engtoft Meldgaard
Address compute‑resource disparities, as only a few countries possess the world’s compute capacity.
Understanding infrastructure gaps is necessary for designing equitable access strategies.
Speaker: Anne Marie Engtoft Meldgaard
Measure the impact and coordination effectiveness of multi‑country AI projects within the network.
Evaluating collaborative projects will reveal best practices and areas for improvement.
Speaker: Seydina Moussa Ndiaye
Track the effectiveness of AI training and retraining programmes in higher education and the workforce to ensure skill development aligns with AI integration.
Monitoring outcomes will help refine curricula and retraining initiatives for broader AI literacy.
Speaker: S. Krishnan
Develop metrics for AI literacy across school and higher‑education levels to monitor inclusivity and progress.
Standardised metrics will enable assessment of how well AI education is being integrated at all stages.
Speaker: S. Krishnan

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

The Innovation Beneath AI: The US-India Partnership powering the AI Era


Session at a glanceSummary, keypoints, and speakers overview

Summary

The panel, moderated by Ujjwal Kumar, gathered leaders from academia, venture capital, industry and government to explore how the rapid expansion of AI is creating a pressing need for new physical infrastructure at scale [13-16]. They noted that the United States and India are jointly building a rare-earth corridor and that Google has pledged $15 billion for a gigawatt-scale AI hub in Vizag, together with new subsea cables, which they described as the largest infrastructure build-out in history [17-22].


Tuan Ho highlighted that over 90% of rare-earth magnets currently come from China, creating a strategic vulnerability for the U.S., and that venture capital is targeting supply-chain solutions such as Vulcan Elements to secure these critical minerals [28-33]. He argued that the US-India partnership offers a “huge opportunity” to develop refining capacity and sourcing routes for AI-related inputs, and that investors should focus on low-hanging-fruit infrastructure that has seen little innovation for decades [40-46][55-57].


Jeff Binder drew a parallel with the early Internet boom, observing that today’s AI entrepreneurs have smarter tool-sets that can dramatically reduce capital requirements and enable cross-border talent to collaborate more efficiently [66-71][80-85]. He warned, however, that the rush to build compute capacity could lead to an “over-build” where excess infrastructure drives down ROI, making early-stage investing more challenging [94-97].


Vrushali Gaud emphasized that AI’s “full stack” includes not only models but also the underlying physical layer (materials, data-center construction, energy and water), and that India is attractive because of its billion-plus user base, youthful population and favorable clean-energy economics [109-115][140-146]. She pointed to Google’s recent announcements of subsea cables linking the U.S. and India and a broader network reaching Africa, Singapore and Australia, underscoring the importance of connectivity for edge deployments [130-138].


Prince Dhawan argued that AI scaling will be limited more by the availability of programmable, resilient grids than by chips, and described the India Energy Stack, which creates a digital interoperable layer allowing distributed rooftop solar to supply data-centres in near-real-time [162-176][181-188]. He added that while renewable generation capacity exists, the key challenge is coordination at scale, and cited Reliance’s trillion-dollar vision as further evidence of massive investment in India’s energy infrastructure [190-192].


When asked whether current funding matches needs, Tuan Ho said there is a mismatch, with too much capital chasing pure AI models and insufficient focus on durable infrastructure businesses that solve clear problems [269-277]. Jeff reinforced this by noting that AI products are now measurable, unlike the dot-com era, but rapid hardware advances could render data-center investments obsolete, heightening risk for investors [286-302]. Tobias Helbig projected a second wave of AI that moves from data-center “beasts” to billions of edge devices, requiring ultra-low-power chips and new semiconductor designs [214-227].


The discussion closed on an optimistic note, with participants highlighting unprecedented government financing in the U.S., India and other countries as a catalyst for an industrial revolution driven by AI-enabled infrastructure [369-377].


Keypoints


Major discussion points


AI’s next-generation infrastructure goes far beyond models – the panel stressed that scaling AI requires massive investments in critical minerals, semiconductors, energy supply and data-center construction, with the United States and India forging a strategic “critical minerals corridor” and a $15 billion AI hub in Vizag [15-22][24][26-46].


Investment landscape and the risk of over-building – venture capitalists highlighted abundant “low-hanging fruit” in power-grid modernisation, mineral refining and other long-neglected legacy industries, but warned that rapid capital inflows could lead to an over-supply of compute assets and ROI challenges for both hardware and model developers [52-57][94-97][269-277][286-302].


India’s clean-energy grid and the “India Energy Stack” as a catalyst for AI – Prince explained that grid programmability and interoperability, not chip supply, are the true bottleneck for AI compute, and that India’s single-frequency, digitally-enabled grid architecture (e.g., P2P trading of rooftop solar) can unlock distributed power for data centres [161-190][170-188].


Shift toward edge computing and decentralised AI workloads – both Tobias and Jeff argued that the future will move from centralized data-centres (“the five computers”) to billions of edge devices that consume minimal power yet deliver high-value AI services, making decentralisation a core strategic focus [221-226][236-238][232-240].


Google’s Climate Tech Centre and broader innovation ecosystem – Vrushali described Google’s new Climate Technology Center, which will fund skilling, low-carbon materials and sustainable aviation-fuel pilots in Tier-2/3 Indian cities, aiming to translate AI breakthroughs into concrete, climate-positive outcomes [339-367].


Overall purpose / goal of the discussion


The session was convened as a forward-looking panel to map the full AI stack, from raw materials and energy infrastructure to edge deployment, and to identify where public policy, corporate investment (especially US-India collaboration), and venture capital can jointly close the gap between AI ambition and the physical capacity needed to sustain it.


Overall tone


The conversation began with high-energy optimism about the historic scale of AI-related infrastructure builds. As the dialogue progressed, speakers introduced a more analytical, cautionary tone, flagging risks such as over-investment, grid constraints, and rapid hardware obsolescence. The panel closed on a hopeful, forward-looking note, emphasizing collaborative opportunities and the promise of a “bright future” for AI-driven innovation.


Speakers

Participant


– Role/Title: Moderator/Host (introductory speaker)


Tobias Helbig


– Role/Title: Dr. Tobias Helbig, Vice President of Innovation at NXP Semiconductors


– Expertise: Semiconductor innovation, AI hardware, edge AI devices [S4][S5]


Ujjwal Kumar


– Role/Title: Founder and CEO of Quantum Alliance; Co-founder of Cognosy AI


– Expertise: AI infrastructure, policy, bridging AI models with physical supply chains [S6][S7]


Jeff Binder


– Role/Title: Serial entrepreneur with multiple Fortune 500 exits; Partner at Harvard Venture Partners


– Expertise: Entrepreneurship, venture capital, scaling AI-driven startups [S8]


Vrushali Gaud


– Role/Title: Global Director of Climate Operations at Google


– Expertise: Climate operations, clean-energy transition, AI-driven sustainability initiatives [S9][S10][S11]


Prince Dhawan


– Role/Title: IAS officer, Executive Director at REC Limited (Ministry of Power)


– Expertise: Power sector reform, digital public infrastructure for energy, AI-energy integration [S12][S13]


Tuan Ho


– Role/Title: Partner at X Fund; former unicorn founder


– Expertise: Venture capital, critical minerals supply chain, AI infrastructure investment [S14]


Additional speakers:


Sundar Pichai – Role/Title: CEO of Alphabet/Google (mentioned)


Sam Altman – Role/Title: CEO of OpenAI (mentioned)


Joel – Role/Title: (mentioned, no further details)


John – Role/Title: (mentioned, no further details)


Rukhsani – Role/Title: (mentioned, no further details)


Full session reportComprehensive analysis and detailed insights

Opening & Panel Introduction


Ujjwal Kumar, founder and CEO of Quantum Alliance, opened the session and introduced a cross-sector panel: Tuan Ho (unicorn founder-turned-venture-capitalist, partner at X Fund), Jeff Binder (serial entrepreneur, Harvard Venture Partners), Prince Dhawan (IAS officer, Executive Director at REC Limited), Vrushali Gaud (Google’s Global Director of Climate Operations), and Dr Tobias Helbig (VP of Innovation, NXP Semiconductors) [4-11].


Framing the Opportunity


Kumar framed the discussion around the “real opportunity” that lies not in AI models themselves but in the physical infrastructure required to run AI at scale [13-16]. He described AI-driven “creative destruction” of traditional infrastructure, demanding new supplies of critical minerals, energy, semiconductors and edge systems [15-17]. He highlighted joint US-India initiatives: a critical-minerals corridor, the FORGE framework launched by 54 countries for AI-critical minerals, and Google’s US$15 billion gigawatt-scale AI hub in Vizag together with four new US-India subsea cables [18-22]. Jensen’s comment at Davos that this represents “the largest infrastructure build-out in human history” underscored the historic scale of the endeavour [19].


Critical-Minerals & Supply-Chain


Tuan Ho expanded on the mineral supply-chain challenge, noting that more than 90% of rare-earth magnets currently flow through China, creating a strategic vulnerability for the United States [29-33]. He cited Vulcan Elements – a venture backed by X Fund and now supported by a US$1.4 billion government partnership – as an early-stage investment aimed at building a domestic magnet supply chain [23-25]. Ho argued that the US-India critical-minerals corridor offers “huge opportunity” to develop refining capacity and diversify sourcing, and emphasized that many “low-hanging-fruit” infrastructure problems, particularly power-grid upgrades untouched for a century, represent immediate investment opportunities [40-46][52-57].


Entrepreneurial Landscape


Jeff Binder drew a parallel with the early Internet boom, observing that today’s AI entrepreneurs have “smarter tool-sets” that dramatically lower capital requirements and enable rapid cross-border collaboration, especially between the US and India [66-71][80-85]. He added that AI tools now enable front-end cultural alignment across the US, India and China, reducing the traditional barrier of “cultural mismatch” in product UI/UX [72-78]. Binder warned of an over-build risk: excess compute capacity could later drive down the cost of hardware and energy, making resources inexpensive relative to today [94-97]. He also noted that GPUs are typically financed by equity rather than debt because of rapid obsolescence, whereas power-related assets can attract debt financing [260-264].


Full-Stack Perspective


Vrushali Gaud shifted the focus to the full AI stack, stressing that the “shiny objects” of models sit atop a physical layer that includes materials, data-centre construction, energy and water [109-115]. She explained why India is attractive: a billion-plus user base, a youthful, tech-savvy population, and a policy environment that supports clean-energy growth [140-146]. Gaud described Google’s network announcements, including new subsea cables linking the US, India, Africa, Singapore and Australia, as essential for bringing edge workloads closer to users [130-138]. She also outlined the Google Climate Technology Center’s three outcome-based pilot pillars: green-skill programmes, low-carbon construction materials, and sustainable aviation-fuel pilots [340-346].


Grid Innovation – India Energy Stack


Prince Dhawan introduced the “India Energy Stack”, a programmable, interoperable grid architecture that enables real-time, peer-to-peer trading of distributed rooftop solar to power data centres [161-166][170-188]. He emphasized that the binding constraint for AI scaling is not chip availability but grid intelligence and resilience, noting that renewable generation capacity in India is already sufficient; the challenge lies in coordination at scale, which the Energy Stack addresses by standardising measurement, identification and settlement [176-188]. Dhawan also referenced the “one nation, one grid, one frequency” principle [162-164] and reminded the audience of Reliance’s trillion-dollar AI-infrastructure vision, underscoring massive private-sector commitment [190-192].


Semiconductor & Edge Future


Dr Tobias Helbig projected the next wave of AI from the current data-centre-centric “five computers” model to billions of ultra-low-power edge devices [214-227]. He illustrated this shift with examples such as a marathon-watch that runs for twelve days on a single charge, arguing that future AI value will come from devices that “think” locally rather than from ever-larger centralised farms [218-226]. Helbig warned that the industry tends to over-estimate short-term impact while under-estimating the decade-long evolution of semiconductor technology [308-313][320-322].


Panel Interactions & Nuanced Views


The panel repeatedly agreed that robust physical infrastructure-critical minerals, reliable grids, and data-centre/network capacity-is a prerequisite for AI scaling [13][29][161][109]. They also concurred that US-India collaboration is pivotal for securing rare-earth supplies and expanding semiconductor R&D [17][28][317-319][140-146]. Jeff warned of a potential over-build, while Kumar highlighted the unprecedented scale of the current build-out [19][94-97]. Regarding financing, Tuan stressed the need for government-backed programmes to close the gap between model funding and infrastructure needs [269-276]; Prince highlighted massive private-sector pledges (e.g., Reliance) and the equity-debt distinction for GPUs [260-264]; Jeff focused on the rapid acceleration of AI tooling and cross-border collaboration [66-71]. Helbig explicitly championed the next wave of ultra-low-power edge devices; Jeff acknowledged the emerging importance of edge but centered his remarks on tooling and collaboration [214-227][66-71].


Actionable Take-aways


– Deepen the US-India critical-minerals corridor and leverage government financing to de-risk grid-modernisation projects.


– Operationalise Google’s Climate Technology Center in India to deliver the three pilot pillars (green-skill programmes, low-carbon construction materials, sustainable-aviation-fuel pilots) [340-346].


– Encourage early-stage founders to target clear infrastructure problems (“low-hanging fruit”) such as power-grid upgrades and to adopt the latest AI tools to accelerate market entry [40-46][80-86].


Unresolved issues include quantifying the risk of AI-compute over-build, aligning long-term grid upgrades with rapid AI deployment cycles, and developing regulatory frameworks for real-time P2P energy trading [94-97][188-190][269-276].


Closing


Ujjwal Kumar closed the session with a thank-you and expressed optimism about future innovators, urging coordinated policy, public-private financing and innovative entrepreneurship to harness AI’s transformative potential responsibly [378-382].


Session transcript
Complete transcript of the session
Participant

Thank you. Thank you. Thank you. …this infrastructure right now and closing the gap between commitments and capacity. This is where the real opportunity lives. Moderating today’s session is Ujjwal Kumar, founder and CEO of Quantum Alliance and co-founder of Cognosy AI. Quantum Alliance works with universities, industry and governments to get top talent working on the foundational problems beneath AI, from critical minerals to energy to semiconductors. He will be joined by Tuan Ho, unicorn founder turned venture capitalist, now partner at X Fund; Jeff Binder, serial entrepreneur with multiple Fortune 500 exits, now at Harvard Venture Partners; Prince Dhawan, IAS officer and executive director at REC Limited under the Ministry of Power; and Vrushali Gaud, global director of climate operations at Google.

Dr. Tobias Helbig, V.I. and VP of Innovation at NXP Semiconductors. Ujjwal, over to you. We’ll start with a quick picture of the panelists, if you can all rise. Thank you.

Ujjwal Kumar

Thank you everyone. We are up against Jan, we are up against her boss. But let’s have fun in this panel. And the broader idea: we have been hearing all about AI models and what AI can do, and this panel is more about AI at scale now, what it needs, and what it would take to fulfill when we talk about AI-driven companies and AI-driven solutions. Let’s talk about this now, as AI is forcing creative destruction of how the world builds infrastructure: energy, semiconductors, critical minerals, physical edge systems, data centers. The US and India are now building this together, with rare-earth corridors in India’s union budget. Google committed $15 billion to India and an accelerated focus on clean energy.

Jensen at Davos called this the largest infrastructure build-out in human history. Two weeks ago, 54 countries launched FORGE, the first global framework for the minerals that power AI. Yesterday, at this very summit, Sundar Pichai laid out Google’s $15 billion commitment to India: a gigawatt-scale AI hub in Vizag, four new subsea cables between the US and India. The models are getting attention, the infrastructure is getting the money, and we have exactly the right people to figure out where all this is going and what we need further. Thank you. To start with, XFund was the early investor in Vulcan Elements, now backed by a $1.4 billion government partnership to rebuild America’s rare-earth magnet supply chain.

What, according to you, does the US-India critical-minerals corridor look like from the investor side?

Tuan Ho

First off, thank you for having us here. I’m really glad you pulled this panel together. I think your point earlier, that we tend to focus on discussing AI models and everything in the model layer but don’t really talk about what exists underneath that, is actually a really unique topic to cover, and one that I, and XFund generally, have been extremely excited about. So the way I look at it is that in this drive we have to build intelligence, we tend to talk a lot about the industrial revolution that it will create, but we also have to look at the industrial revolution that will be required in order to support the creation of that infrastructure. There are a lot of inputs required for AI infrastructure. You’ve got energy: the power grid and power generation have to be clean, sustainable, renewable, and the demands of AI infrastructure are going to require us to really solve large problems as to how to supply that power. You’ve got critical minerals, which everybody’s talking about now. You mentioned Vulcan Elements, right? Vulcan Elements was a business that we invested in.

It was a Navy veteran out of Harvard who had spent a lot of time looking at supply chain issues for the U.S. military and noted that over 90% of rare-earth magnets were coming through China. It creates a strategic vulnerability for the United States. And the reason why it creates a vulnerability is because, if you think about it, there are so many things that we need, that we build, that require magnets. You can’t build hard drives. You can’t build motors. I mean, nothing that moves can be built without them. We talk a lot about chips; you can’t manufacture chips without magnets. And so I look at, you know, problems like that, and, you know, I’m not going to be able to build a magnet myself.

I think you’ve got guys like me, venture capitalists, looking at… the opportunity to invest in building up that type of infrastructure to solve those sorts of problems. But that’s just one, right? You have to figure out how do you source it? Where do you get the materials from? And so when you look at things happening on the geopolitical scale, for the first time we are, at least in the United States, we’re looking at these trade deals to try to figure out where we’re going to supply the materials. Like how are we going to completely rebuild our power grid? How are we going to build up the capacity for refining those materials? Where are we going to source them from?

Where are we going to have to get them from? As we’re looking at data centers, how can we make them more sustainable, more power efficient? In order to support the AI needs that we have right now, power consumption for data infrastructure is already, I think, approaching 10% plus. How are you going to meet that demand? And so from an investor perspective, yeah, we’re going to look at all of the cool AI products that entrepreneurs are looking to build. But on the other side, what is very exciting for us is looking at all the low-hanging fruit that exists across all of the input industries that have not been innovated in for decades.

Power grids that have not been upgraded for the better part of a century. It creates huge opportunity for us as investors. And you mentioned US-India. Yeah. I… I find a lot of opportunity in the U.S. and India working more closely together to try to figure out how, on both sides of the world, we can build great companies to meet that need.

Ujjwal Kumar

Thanks, Tuan. You spoke about needs. You spoke about the innovation. You spoke about what the early-stage startups should be focusing on. I’ll move to Jeff, who has built companies from scratch and made multiple Fortune 500 exits. I would ask him: what would it take for young entrepreneurs to build and scale in this space, and to do it successfully?

Jeff Binder

Thank you for having us and putting this together. I know that we have more people here than Sundar has at his keynote. So I heard Sam Altman only had 10 yesterday, so we’ve already outdone him. So, you know, I think it’s such an interesting time. I was there in the early web days in ’99 and 2000 and 2001, and, you know, the excitement around the Internet obviously fueled a massive tech boom. And ultimately a fiber build-out, an infrastructure build-out, and it took years for all of that infrastructure to be absorbed, and ultimately it was. I think the difference this time around is that the tool sets themselves that entrepreneurs have available to them are smart. And they can bridge some of the challenges, especially since we’re talking about partnerships between the U.S.

and India. But, you know, oftentimes, especially when you get to things like user interfaces, there are cultural differences from the development work that would happen in India for an Indian audience or the U.S. or China. That’s always made it more difficult for collaboration, sort of on the front end. And many of the products that entrepreneurs are working on are often, you know, front-end facing, consumer facing, at least initially. They’re generally not building a lot of B2B platforms. That happens later, when you get the experience as to what’s necessary in a business environment. And I think that AI is going to change drastically the ability to leverage sort of cross-border talent, in particular with India and China and other places that were harder to leverage before.

Certainly from a quality perspective, SQA and back-end development, I think entrepreneurs have been able to leverage India and other places for the last couple decades. But it’s been harder to get the front end of a product to sort of match the cultural necessity of a given market. And I think that’s going to change. And I think for entrepreneurs, it means that they have a massive amount of leverage that they didn’t have before. And it means that we’re going to have a flood of new ideas that are actually brought to market and work fairly well, and allow entrepreneurs to deliver products with probably a tenth the capital, depending on the product, obviously. If you’re doing magnets, you’re sort of stuck with the physical properties and refining and some of the things that you can’t do from an IT perspective.

But I think for entrepreneurs, it’s an extraordinary opportunity. And those that will win, in my mind, over the next few years are going to be the ones that leverage the tools most quickly, because it’s not possible any longer to develop in the way that people were developing two or three years ago. If you do that, you’re going to be way late. And so now it’s not so much about your product, but about learning what the state of the art is, which is literally changing every day in AI. And it’s a golden age, I think, for entrepreneurs. I think it’s going to be a much, much more difficult environment for investors, because the wealth of ideas is going to get much further along.

And that makes it more difficult, not less difficult, I think, to be an investor because you have more mature products. The entrepreneurs are going to be more mature and the entrepreneurs will have more leverage. And they may be able to make it to market much earlier than they would have otherwise, which means where they might have gone for a second round of seed capital, they may be able to get to market and be into revenue with a single small round of seed capital or no seed capital. And that makes that whole early, early stage ecosystem of angel and venture investing much more challenging. And so I think it’s just a great time. I do think that there’s a huge risk, and I don’t think it affects entrepreneurs or young entrepreneurs, but I do think there’s a huge risk of an overbuild.

It feels a lot like the leverage in terms of optimizing hardware and infrastructure is only going to get better, and it’s potentially going to leave us with, actually… I know right now we’re worried about power, we’re worried about compute, we’re worried about data centers, but I would project that if we sat here two years from now, we would be looking at a grand overbuild, with a real challenge around ROI and how to make all these investments work. And so that’s going to be another positive for the entrepreneur, because those resources are going to become very inexpensive relative to even what they are today. And so in that sense, I think it’s a great day for young entrepreneurs.

Ujjwal Kumar

Thanks, Jeff. Picking up from you about leveraging some of the AI tools, going to market faster, the build-out, ROIs, now we move on, at the right time actually, since you spoke about ROIs. One of the things I was very curious about: all the world leaders coming here and putting a big bet on India. We just heard Sundar yesterday talking about $15 billion, new subsea cables with India, new innovation hubs. Vrushali, you are leading the clean energy transition with Google. I wanted to understand what AI’s scale demands of Google in terms of energy, and why you are placing such heavy weight on India.

Vrushali Gaud

Okay, good. Thank you. Thank you all for joining; I appreciate it. I am being pitted against my boss, so I’m going to try and keep it as entertaining and as nice and valuable as I can. That’s a very interesting question in terms of the scale and why India. But I’ll build on a few things that both of you spoke about. One of the interesting things about this particular AI innovation timeframe is that it’s what I call across the full stack. So you’re looking at a lot of things that are happening. Typically we talk about software, AI models, applications; those are the shiny objects everybody talks about, and very exciting. But then there is the amount of work that’s happening underneath that, which is why I love this session too: beneath the AI.

The physical infrastructure layer of it is fascinating. And that goes from everything from the foundational layer that you’re talking about, which is your materials, your data center construction, your access to energy, to water, to all those foundations. So then how do you construct things the right way? We forget about the physical; these are all buildings, quite a few of them. How do you construct them the right way? And then how do you operate them the right way? And then the use of that. And so what we are seeing is just tremendous value and innovation across the entire stack of AI. Which I, as an engineer, find very, very fascinating. So, in terms of Google:

The privilege and responsibility that Google has is: how do we bring about the most value across that full stack, both from a business perspective and from an impact perspective? And so a lot of the investments you’re seeing are across those pieces, right? So if you walked across this summit, you would hear different pieces of it. Our expo was mostly featured on the product side: AI for education, AI for healthcare, AI for agriculture. How do you use AI in domains, contextualize it? And all of that has a layer of a country and where that context is. And then the announcements you talked about were a lot more on the physical side. So it’s what’s required for data centers.

You need good design, good builds, but then you also need network. And so the subsea cable announcement is part of that. And if you read a little bit, it’s fascinating. It’s an India-America connection, but one way we are building goes across Africa, so that’s a big reason, to bring Africa on board. And the other way it goes via Singapore and Australia. So it’s a fascinating network, because, again, you can build data centers, but what’s the point if you can’t actually use them, network them, and bring them closer to wherever the edge cases are? So, super excited about those pieces. Now going to your point about why India.

So why not India, I think, is what I would start with. But most people know it’s a billion-plus users. It’s a great growth market. It’s a lot of young population who we think are going to be the frontier of the growth. It’s a lot of population who is also very eager about tech and tech adoption. So if you think about what happened with fintech and the phones and digital tech, a lot of the APAC countries, Asia and the Global South, jumped ahead. I see people who didn’t even have credit cards; now everybody uses GPay and UPI and all of those, right? So there’s a whole revolution where you can skip and build. And I think that’s another big exciting part of investing in India: can a generation of innovators come up?

Who don’t have the linear growth that we’ve seen in other regions but can leapfrog it? And I, from an operational perspective, feel super excited the same way about clean energy. You can talk a little bit more about that, Prince, but India is one of the few places where the math on clean energy just works. There’s growth, so there’s tremendous demand; lots of solar and wind potential; tremendous research going on in battery and long-duration storage; good policies. And then the biggest issue we’ve seen in the US is the grid, but here they’re trying to build a high-frequency grid, which is fabulous, which then lets you bring in the innovation on that layer, and that’s the unblock. And if only you could solve permitting issues, then you’re solving the whole stack. That’s the excitement: it’s where the math works, where the business case works, where you’ve got the talent and the innovation potential, and then you also have the users.

Ujjwal Kumar

Wow, that’s amazing. I do understand now, thank you. Thanks. So moving from that side, we heard about the demand side, and I’d love to take the insights from Prince, who is actually building the digital public infrastructure for the power sector. They have been doing some incredible work, which I’ve seen in the past few weeks, particularly on P2P trading. And Prince, with all the initiatives on grid reforms and the trading platform you are launching at this summit, how do you see AI’s energy demand going, and how are you supporting it? Your insights.

Prince Dhawan

Thank you. Thank you, Ujjwal. Thank you, everybody, for being here this morning. Let me first start by putting the AI compute demand in context. Honestly, resonating with what Tuan had also said in his remarks, I feel that AI essentially will not scale unless your power is programmable. So the AI, I would say, I don’t want to call it a race, but the AI build will depend a lot not on chips, as we might think. We do have the capacity and capability the world over to solve that problem. But I think the binding constraint would be grids: how intelligent and resilient your grids are. And I believe that is what is going to define the development of most of the compute infrastructure.

Now, what India has essentially started doing is redefining the architecture; we are redefining how we view the grid. India already has one nation, one grid, which essentially means one frequency. And now we are also adding one digital interoperable layer, which is being brought in by the India Energy Stack. So what does this mean? What does the stack essentially do? The India Energy Stack basically creates the interoperable rails for systems to interact with each other. If you have a data center, it is not just creating high demand; it is creating high peak demand that needs the grid to respond. And that is where you need coordination at scale.

So what is going to be scarce in the times to come is not electrification. As Vrushali said, the math works when you talk about solar power, when you talk about wind, even hydro. So that is where the math does work. But what needs to be ensured is coordination at scale. And that is what the India Energy Stack is essentially doing by laying down those foundational building blocks. Now, what we started with was a first showcase of how you can use the stack to source energy from distributed energy resources, like the solar rooftop panels which we have on top of our households. We can literally transact in energy the same way that we transact using GPay or UPI payments.

Or using other such applications, Paytm, PhonePe, etc. So similarly, just imagine that the data center, instead of relying on long-term PPAs and then hoping that the grid will deliver, can essentially source its power from millions of such distributed rooftop assets, dynamically, at scale. Just imagine the power of that happening. It can literally be generating livelihoods for a lot of people who may not even be in geographical proximity to the data center. Individual retail households can monetize their rooftop solar power by supplying to such data centers. How does the stack enable it? The stack lays down standard rules for measurement, identification, and settlement, all in near real time. So that’s how the architecture of the grid itself is changing.

Let me add: the grid generally evolves in decades. As Tuan said, we have not invested heavily in grids the world over; maybe China is an exception there. But India has started doing the plumbing work; it has started doing the hard work on that layer. And generally AI evolves in quarters, but the grid evolves in decades. How would you keep pace, right? So that is where the India Energy Stack comes in: we push that development frontier and we enable people to talk to each other on the grid. So AI would need not just electrons, not just chips; it actually needs intelligent electrons, and that is where the India Energy Stack sits. I think that should be, in the times to come, one of the other reasons, beyond economics, that companies like Google or other companies would take bets on India.

And let me also tell you, you did recount Sundar’s message about $15 billion, but there’s also Reliance’s message about a trillion dollars in the next seven years. So let’s not forget that as well. I’m just putting stuff in context.
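The settlement rails described above (standard rules for measurement, identification and settlement, cleared in near real time) can be sketched in miniature. This is purely illustrative: the record layout, field names, and the greedy price-ordered clearing rule are assumptions for exposition, not the India Energy Stack's actual design.

```python
from dataclasses import dataclass

@dataclass
class RooftopOffer:
    producer_id: str     # registry identity of the rooftop asset (identification)
    kwh: float           # metered surplus available this interval (measurement)
    price_per_kwh: float # asking price for this interval

def settle_interval(demand_kwh, offers):
    """Clear one interval: cheapest verified offers first, settled immediately."""
    ledger, remaining = [], demand_kwh
    for offer in sorted(offers, key=lambda o: o.price_per_kwh):
        if remaining <= 0:
            break
        taken = min(offer.kwh, remaining)
        ledger.append((offer.producer_id, taken, taken * offer.price_per_kwh))
        remaining -= taken
    return ledger, remaining

# A data centre needs 9 kWh this interval; three households post surplus.
offers = [
    RooftopOffer("hh-001", 3.0, 4.5),
    RooftopOffer("hh-002", 5.0, 4.2),
    RooftopOffer("hh-003", 2.0, 5.0),
]
ledger, unmet = settle_interval(9.0, offers)
# hh-002 clears first (cheapest), then hh-001, then 1 kWh from hh-003.
```

The point of the sketch is the standardisation: once identity, metering and settlement follow one shared schema, any data centre can transact with any rooftop, regardless of geography, exactly as UPI did for payments.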

Ujjwal Kumar

Two minutes to one.

Tuan Ho

Like India. And by the way, India is very, very well represented in Cambridge, which is how I probably met half the people on this panel. But it really does create these global-scale opportunities to reinvent, to support this other industrial revolution beyond just what the AI and the intelligence is allowing us to do.

Ujjwal Kumar

Thanks, Tuan. Yeah, that’s exciting. Now with that, we want to move to Dr. Tobias. We have been talking about the physical infrastructure layer. He has been working in semiconductor innovation for the last 20 years, building it across the US, Europe and India. I wanted to ask him: what does the next innovation look like to you? What are you working on at this point? Where are you placing your bet?

Tobias Helbig

Yeah, thank you very much for the question. Thank you so much for having me here. It’s great. And I would like to build a little bit, Jeff, on what you said earlier, where you had this: are we on the right track? What the heck are we doing? Let’s zoom out for a moment. In 1942, the head of IBM made a statement: there’s a world market for about five computers. And he was right, given the kind of computers he was looking at. We know better now, some years later: there are laptops, PCs, mobile phones; there’s basically a computer in every device. There are billions of computers. Now what we discuss is: hey, AI, huge disruption. Power hungry like hell.

Shall we build some new computers? Shall we build power plants? Or how do we run it with renewable energy? And I get this nagging feeling: is this really it, or are we missing, in what we’re discussing, what came after these five computers? If I take benchmarks: here’s my brain, and it takes 20 watts. There’s a fly, which is a pretty agile, intelligent robot, below a milliwatt. There’s something which is going to happen which is different from what we’re discussing here at the moment. And that is what’s driving us as a semiconductor company: building on what starts now and driving it out into the real world. So we today have products where, on whatever 10 watts or so, you can run very meaningful LLMs. You can interact; you can drive the intelligence into the edge, into our real world. That goes hand in hand with what’s happening around here: moving from “hey, I can perceive something, is it a dog, a cat” to “I can think”, generative AI, I can create something out of those models, to the point that I can create agents, stuff which acts on my behalf out there in the real field. And that drives the intelligence, and this disruption you’re looking at here at the moment, close to us, into the real world, to the point that these devices, these robots, these whatever you want to call them, they’ll be able to learn.

So what we discuss here, and this is a huge challenge, I totally agree with all statements made before, will see a next phase. It will move into the real world, move close to me, move into autonomous systems, which ultimately change my life and change industries. There is this second wave building up, and my expectation, to some extent my hope, from where we sit as a company, is that this huge thing you’re already discussing with data centers is the five computers, and what is coming is the billions of edge devices which we will also see in the AI space. And just to give an example, I’m running marathons with a watch with me.

I charged it before I left Germany, and it still has 12 days of battery power. And there’s a lot of intelligence in that watch. This is where we are going. So the one thing is feed the beast and make it happen. The second is avoid the beast being hungry, and look at totally different models which will come in the next phase. Thank you.
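Helbig's contrast between the 20-watt brain, the sub-milliwatt fly and the multi-day watch battery is at bottom an energy-budget argument, and it can be made concrete with back-of-envelope arithmetic. All wattage and PUE figures below are illustrative assumptions, not numbers from the session or vendor specifications.

```python
HOURS_PER_DAY = 24

def daily_kwh(watts: float, pue: float = 1.0) -> float:
    """Energy drawn per day in kWh, scaled by facility overhead (PUE)."""
    return watts * pue * HOURS_PER_DAY / 1000

# Illustrative operating points (assumed):
datacenter_gpu = daily_kwh(700, pue=1.2)  # one accelerator plus cooling overhead
edge_module = daily_kwh(10)               # the ~10 W edge-LLM class Helbig mentions
smartwatch = daily_kwh(0.01)              # ~10 mW average draw, multi-day-battery territory

# One always-on accelerator draws as much per day as many dozens of edge modules.
ratio = datacenter_gpu / edge_module
```

Under these assumptions the accelerator burns roughly 20 kWh a day against a quarter of a kWh for the edge module, which is the quantitative core of the "feed the beast" versus "make the beast less hungry" distinction.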

Ujjwal Kumar

This is very interesting. Now we are talking about taking AI out of the data center. Any comments from the fellow panelists?

Jeff Binder

I agree with him. I think the IBM analogy is a very good one. I think we are all focused on the core and centralization, and as we’ve seen in many markets, they move from centralization to decentralization to hybrid approaches. And so that’s, I think, an incredibly astute observation. I do think edge devices ultimately have to be the core component in the full proliferation of AI. And that means, as he said, that small amounts of power can generate lots of value. It doesn’t necessarily have to be tokens in the center of a data center. So that goes to my concern, and look, I think all of the resources that are being built will eventually be consumed.

That’s a given. It’s a question of when, and on what ROI they’ll deliver as they’re being consumed and used. And I think that’s a huge risk, because agents at the edge are probably going to end up being, in the end, a much more likely modality a decade from now. And it’ll be interesting to watch, for sure.

Prince Dhawan

Okay. I do have a small addition, and I completely agree, actually. As Jeff said, it was an absolutely astute observation. But you know where you can see this being played out in practice even today? When you talk about finance. The finance world knows this. I work for a non-banking financial company, and one of our main products is infrastructure financing, where data centers are a product that we finance. And Vrushali was in a panel discussion that spoke about the trifecta of AI, energy and finance. But you know, the finance bros, they have figured it out, because today if you go for financing of a data center, you won’t get debt financing for GPUs.

GPUs are mostly financed by equity because there is obsolescence risk in GPUs. You would get debt financing for the brick and mortar, maybe even for sourcing power, but you won’t get debt financing for GPUs. And there you have it, because they are seeing the big picture being played out there. So completely on board there, yes.
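The debt-versus-equity split described here comes down to collateral value under obsolescence. A toy calculation (the prices and annual decline rates are invented for illustration) shows why a lender treats a GPU differently from the building around it:

```python
def residual_value(cost: float, annual_decline: float, years: int) -> float:
    """Resale value after geometric depreciation at a fixed annual rate."""
    return cost * (1 - annual_decline) ** years

# Hypothetical collateral positions after a 3-year loan term:
gpu = residual_value(30_000, 0.40, 3)       # fast obsolescence: 30k -> ~6,480
building = residual_value(30_000, 0.05, 3)  # brick and mortar: 30k -> ~25,721
```

With the GPU's assumed 40% annual decline, barely a fifth of the collateral survives the loan term, so lenders push that exposure onto equity; the slowly depreciating shell, cooling and power assets remain bankable.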

Ujjwal Kumar

Again, Vrushali has been…

Vrushali Gaud

No, no, no, I’m good.

Ujjwal Kumar

No, no, no. We want to hear from you. Please, go ahead.

Vrushali Gaud

No, I think the risk of stranded assets, the way you said, and the ROI is real, in the sense of where you’re investing and in what. But I think your point is very astute. There are portions of this that will be obsolete; there are portions of this that will be very easily replaced, whether on the chip side or in how you write the programs. Even the models, right? You went from large-scale to smaller; how do you build them? But what I’m also hoping is that the bets on some of the hard infrastructure are just good things to do. To me, the fact that we are seeing a transition to renewables, or the grids being operated in a better way, some of the boring bits that people didn’t pay attention to, how do you run things efficiently, those I think are the good pieces of this. And then it gets right-sized: we’ll get over the FOMO and the extra investments, and it’ll probably get right-sized into where in the stack you really want to invest, with the ROI.

Ujjwal Kumar

Thank you. So with this I’d like to take it a little bit deeper. We spoke about some of the opportunities; we agree on some things, and maybe not on others. Tuan, you invest early in founders. I wanted to check with you: can you tell us, is there a mismatch between what is getting funded and what needs to be funded?

Tuan Ho

That’s a good question. Is there a mismatch between what’s getting funded and what needs to be funded? Well, I mean, probably. Yeah, I mean, I think, well, okay, going to the theme of this, I think there is more likely to be a mismatch between what is getting funded in the sort of like the pure AI world, if we’re talking about the foundational models. I think, Jeff, I think you had made this comment a little bit earlier. You look at a lot of the AI companies out there, and it’s a little bit like the dot -com era where you’ll see 100 companies, and the reality is that in five years, there will be five of them that are left.

I think one reason why I like focusing on infrastructure-type businesses is because I think there’s more durability and clarity to exactly what the problems are that you’re trying to solve. I mean, every great startup begins with a really well-understood problem and a product, what they call product-market fit: a founder that’s able to build a great solution to that problem that has some sort of market validation and need. And what I find really exciting about infrastructure businesses is that I think the problems are a lot clearer in terms of what you’re trying to solve. To your point, there’s a lot more risk in the GPUs; there’s a lot more risk in the models that you’re building around them.

And the reality, too, is that those things are also going to change a lot faster. I mean, if you look at a data center as an example: a data center ultimately is a giant box that provides a lot of power at scale, and it needs to be able to efficiently cool what’s inside of it. In terms of what GPUs or compute you put inside, I mean, that can change over many, many generations. But the utility of the infrastructure you’ve built there will always have value. So, I don’t know if that answers your question.

Jeff Binder

To add to that, I think that if you look at the dot-com era, with the exception of hardware companies, which were in switches, like Cisco and other players, it was very difficult to determine whether a product was good or not. For those who remember MySpace before Facebook: it looked like MySpace was going to own the social media space. Of course, half the people in here probably don’t even know what MySpace is. It’s much different now. There’s a measurability component in all aspects of AI that didn’t exist in the dot-com era. You know, you had commerce platforms, but it wasn’t clear what made one commerce platform better than another. The consumer would ultimately decide that over time and through iterations.

And if you remember, Amazon for a long time was known for one-click ordering. Well, none of us really want to do that because we don’t want to make a mistake and find out that we bought the wrong thing. I think now it’s different. Almost every aspect of artificial intelligence deployment, from the foundational aspects all the way to the top of the stack, is measurable. And so that’s going to make the success and failure of businesses much more clear, much sooner than it was in the dot-com era. And I think that’s going to be ultimately the element that shakes out companies very quickly. And then to the point about obsolescence and GPUs, we don’t know what the hardware roadmaps look like, even inside of a Google or Jensen’s company, NVIDIA.

Or somebody else that’s out there. And power, which is the fundamental thing I think we’re talking about foundationally, can be grossly disrupted by those advances, because if somebody has a breakthrough on chip design that’s now 10 or 50 or 100x what somebody else deployed, their data center is now almost instantly, at least from a financing perspective, obsolete. And so that’s a huge danger, I think, for investors in those foundational areas.

Ujjwal Kumar

Thank you. Dr. Tobias, you have also been very strongly involved in the innovation ecosystem. What is your take on this? What are you seeing? You are also involved in India; I’ve seen your company running hackathons and competitions. I’d love to hear more from you.

Tobias Helbig

Adding to what was just discussed: we have a tendency to overestimate the impact of the next two years and underestimate what’s happening in 10 years. And at the moment, we are going into this with a huge bang, which may make these ups and downs even bigger. From my perspective, everything we are discussing on AI is absolutely real. This is a huge disruption. This is changing industries. This is changing lives. This is changing professions. Wherever there is data, there is change. And that, in the end, is driving what we are doing by developing the products we have, which is semiconductor products, and by being in India for that for literally decades. Development centers are in our DNA and history as a company, from Motorola, Freescale, and NXP, here in the Noida-Delhi region, in Bangalore, and so on.

So we are very much working on that. And on your question from an innovation perspective, well, we all know the hype cycle. And that’s tough, because it always means that there is disruption, and there is a trough of disillusionment. And we’ve seen it for all major breakthroughs, especially when they are being hyped up like hell. There were the self-driving cars; there were other things. In the end, these things get real. They have the substance. They happen, and they transform things. And AI will. On the way there, and also on the question, hey, what’s the risk? What’s the bad, and what’s coming from the sidelines? I think we will still see troughs of disillusionment and surprises. There was one a while ago, when this wave had a DeepSeek moment.

Such moments will come again. And there will be a recovery from that, I’m also sure. So, I’ve been in innovation for literally decades. I love it. It’s a roller coaster. We overestimate, we get shocked, and we get it right.

Ujjwal Kumar

Thank you. With that, I’ll go to Vrushali. I was very excited when I saw Google launching the Google Climate Technology Center. And I would like you to quickly give your insights: what is it about, and what should innovators be looking to it for?

Vrushali Gaud

Yes, thank you. So, super excited. This week we announced, in partnership with the Office of the Principal Scientific Adviser to the Government of India, Google’s Center for Climate Tech. So how did we get here? Because that’s the interesting part: we see a lot of innovation. I live in Silicon Valley, I was raised in India, and I’ve lived across the world, back and forth. There’s innovation which comes from big institutes, big academic settings, big companies. But there’s also innovation that comes from different corners of the world. What we loved about the PSA’s philosophy was that they’re trying to get more Tier 2 and Tier 3 cities, and also a wider spread of universities and academia, involved in this.

So, you know, beyond your premier ones. So that was very enticing. The other thing is: how do we take innovation down to the roots? Which I think also helps with some of the hype cycle, because you’re making it local and you’re also making it contextual to those cities. So with that in mind, here is what our center is looking for. We have a couple of big pillars. One is skilling. We think there’s green skilling: there is a lot of focus on AI skilling, but in terms of green skills, which are decarbonization and clean energy, and in terms of materials and chemistry, there are a lot of new things in those spaces which haven’t been brought into college or university curricula.

So we want to build upon that. A lot of the construction and investments are happening in Tier 2 cities, so we think it’s a great way to get a more diverse pool skilled in that. So that’s the number one pillar. The second one is low-carbon materials. So you go to embodied carbon, something you’re all very passionate about. How do you drive innovation in construction, which is going to be huge? And again, it’s not just data centers: what you learn from data centers can apply to real estate and commercial buildings. So it has to do with low-carbon steel, low-carbon cement, and low-carbon materials as you see them go through that construction cycle. And the third one we have right now is sustainable aviation fuel, which is a little different from data centers.

It’s not that, but I think it’s a good growing area. Again, one of the philosophies we have is: where can we find first-of-a-kind pilots, places where we can build and bring the Google brand and innovation? And we think sustainable aviation fuel, in a growing country that now has some of the fastest-growing airports and air traffic, would be a good one too. And our hope, as we go through this, is to be very outcomes-based: not pure research, but pilots and actual uptake.

Ujjwal Kumar

Thank you. Very quickly, Tuan, do you have any closing 30 seconds?

Tuan Ho

One thing that I don’t think we had a chance to discuss as much, but that is important, especially as we’re at an event like this, is government financing. Another thing that’s been really exciting about this is having a tech conference where you have the Prime Minister and multiple heads of state coming from around the world to say, these are things that we need to invest in, these are things that we need to support. From a tech VC side of things, that is something we’re not used to seeing, but I also think it’s very exciting. In the United States you’re seeing hundreds of billions of dollars being invested by the federal government into infrastructure.

And you’re seeing similar investments being made in countries like India, and in countries beyond that; China’s been doing this for a while, but you’re seeing this happen around the world. And so, where are we right now? Like I said at the beginning, there’s the industrial revolution that AI is ushering in, but there’s also the industrial revolution that the requirements of AI are going to usher in. So I think it’s going to be a bright future for us all.

Ujjwal Kumar

Thank you. I think that’s a great closing for us, and I enjoyed talking to all of you. I really had so much fun. Thanks, and your insights are amazing. Hopefully the innovators watching got something out of it, and we’ll see some new people coming to all of us doing the innovations. Thank you.

Related Resources: Knowledge base sources related to the discussion topics (16)
Factual Notes: Claims verified against the Diplo knowledge base (6)
Confirmed (medium)

“Tuan Ho is a unicorn founder‑turned‑venture‑capitalist and partner at X Fund.”

The knowledge base lists Tuan Ho as a unicorn founder turned venture capitalist and Partner at X Fund, confirming his role.

Confirmed (high)

“Jensen said at Davos that the AI infrastructure build‑out is ‘the largest infrastructure build‑out in human history’.”

A source records Jensen’s Davos comment describing the AI infrastructure build‑out as the largest in human history.

Confirmed (high)

“The FORGE framework was launched by 54 countries as a global framework for AI‑critical minerals.”

The knowledge base notes that 54 countries launched FORGE, the first global framework for the minerals that power AI.

Confirmed (high)

“Google committed US$15 billion to a gigawatt‑scale AI hub in Vizag, announced alongside four new US‑India subsea cables.”

The source reports Sundar Pichai’s announcement of Google’s $15 billion gigawatt‑scale AI hub in India, confirming the investment figure and location.

Confirmed (high)

“More than 90% of rare‑earth magnets currently flow through China, creating a strategic vulnerability for the United States.”

The knowledge base highlights a 90% dependence on China for critical minerals, describing it as a strategic vulnerability.

Additional Context (low)

“Power‑grid upgrades that have been untouched for a century represent “low‑hanging‑fruit” infrastructure problems and immediate investment opportunities.”

The concept of “low‑hanging‑fruit” infrastructure opportunities is mentioned in the knowledge base, providing broader context for such investment themes.

External Sources (86)
S1
WS #280 The DNS Trust Horizon: Safeguarding Digital Identity — Participant (role/title not specified; appears to be Dr. Esther Yarmitsky based on context)
S2
Leaders TalkX: Moral pixels: painting an ethical landscape in the information society — Participant (role/title not specified; area of expertise not specified)
S3
Leaders TalkX: ICT application to unlock the full potential of digital – Part II — Participant (role/title not specified; area of expertise not specified)
S4
https://dig.watch/event/india-ai-impact-summit-2026/the-innovation-beneath-ai-the-us-india-partnership-powering-the-ai-era — Dr. Tobias Helbig, V.I. and VP of Innovation at NXP Semiconductors. Ujwal, over to you. We’ll start with a quick picture…
S6
Employing AI for consumer grievance redressal mechanisms in e-commerce (CUTS) — Moreover, there has been a decline in priority given to consumer-driven businesses in some countries, particularly in In…
S7
Diplo @ UNCTAD eWeek — Moderator:Ujjwal Kumar,Associate Director, CUTS International
S8
The Innovation Beneath AI: The US-India Partnership powering the AI Era — Jeff Binder, serial entrepreneur with multiple Fortune 500 exits, Harvard Venture Partners
S9
Building Climate-Resilient Systems with AI — Vrushali Gaud, Global Director of Climate Operations at Google, leads Google’s decarbonization, water and circularity s…
S10
https://dig.watch/event/india-ai-impact-summit-2026/building-climate-resilient-systems-with-ai — And so that’s data centers. That’s the way you operate that. That’s the networks that feed into all of the applications….
S11
The Innovation Beneath AI: The US-India Partnership powering the AI Era — Vrushali Gaud, Global Director of Climate Operations at Google
S12
The Innovation Beneath AI: The US-India Partnership powering the AI Era — Prince Dhawan, IAS officer, Executive Director at REC Limited under the Ministry of Power
S13
https://dig.watch/event/india-ai-impact-summit-2026/the-innovation-beneath-ai-the-us-india-partnership-powering-the-ai-era — Thank you. Thank you. Thank you. this infrastructure right now and closing the gap between commitments and capacity. Thi…
S14
The Innovation Beneath AI: The US-India Partnership powering the AI Era — Tuan Ho, a unicorn founder turned venture capitalist at X Fund, provided crucial insights into the strategic vulnerabili…
S15
The Geopolitics of Materials: Critical Mineral Supply Chains and Global Competition — And it says it takes 10 years to have a permit. I thought there must be something wrong. So I asked my team, there must …
S16
Open Forum #53 AI for Sustainable Development Country Insights and Strategies — Aubra Anthony: Yeah, thanks, Yuping. And, yeah, a very auspicious time, really. I mentioned earlier some of the issues t…
S17
AI as critical infrastructure for continuity in public services — “I believe that there is perhaps awareness challenge as well as the capacity challenge, because I think that this whole …
S18
Securing access to financing to digital startups and fast growing small businesses in developing countries ( MFUG Innovation Partners) — A founder highlighted the importance of capital injection for achieving sustainable profitability in startups. They ment…
S19
Contents — We advocate regulatory exemptions for gigabit networks beyond the case of co-investments. However, it must be ensured th…
S20
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Hemant Taneja General Catalyst — Taneja argued that India is uniquely positioned to lead in AI deployment due to its status as the world’s strongest grow…
S21
Driving Indias AI Future Growth Innovation and Impact — But you must be aware that, you know, this game is actually, I mean, if you see my context, I mean, I have four diamonds…
S22
Keynote-Roy Jakobs — This comment introduces a systems-thinking perspective that acknowledges the complexity of AI implementation beyond just…
S23
The Future of Innovation and Entrepreneurship in the AI Era: A World Economic Forum Panel Discussion — The World Economic Forum discussion revealed that while AI is transforming entrepreneurship by reducing barriers and ena…
S24
Building Trustworthy AI Foundations and Practical Pathways — “But similarly now, econ of maybe writing novels is gone.”[20]. “The movie industry is worried.”[21]. “That entire econo…
S25
Keynote by Uday Shankar Vice Chairman_JioStar India — This observation is profound because it identifies how AI fundamentally changes the rules of competition in media. It su…
S26
Comprehensive Discussion Report: AI’s Transformative Potential for Global Economic Growth — And we want to invest in infrastructure. Infrastructure is a great investment opportunity. This is the single largest i…
S27
Ready for Goodbyes?: Critical System Obsolescence — Adaptability emerges as the best approach to embrace these changes. Being flexible and adaptive is crucial in navigating…
S28
Embracing the future of e-commerce and AI now (WEF) — Another key argument is the need for fair competition in the market and the importance of keeping administrative costs f…
S29
AI in justice: Bridging the global access gap or deepening inequalities — At least5 billion people worldwide lackaccess to justice, a human right enshrined in international law. In many regions,…
S30
World Economic Forum Panel Discussion: Global Economic Growth in the Age of AI — Unlocking AI’s potential requires accelerated energy infrastructure development while protecting affordability for consu…
S31
G20 Contributions on Digital Economy and Digitalization for Development (Indonesia) — Developed countries could provide this as low-hanging fruit
S32
Manufacturing’s Moonshots Are Landing . . . Are You Ready for the Next Wave? — In conclusion, the analysis highlights the potential challenges posed by geopolitical issues and emerging sustainability…
S33
Figure I: The Global Risks Landscape 2019 — Climate change has driven significant change in the world’s infrastructure needs since our 2010 report. There is now mor…
S34
AI and Data Driving India’s Energy Transformation for Climate Solutions — And I just want to make one quick point here. Without doing this kind of measurement I might be able to look at energy f…
S35
Powering AI _ Global Leaders Session _ AI Impact Summit India Part 2 — This comment shifted the discussion from problem identification to solution positioning, introducing geopolitical and ec…
S36
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Giordano Albertazzi — Albertazzi positioned India as central to the AI evolution, citing several key advantages that make the country particul…
S37
Designing Indias Digital Future AI at the Core 6G at the Edge — Power consumption concerns are driving data centers toward edge deployment
S38
HETEROGENEOUS COMPUTE FOR DEMOCRATIZING ACCESS TO AI — Artificial intelligence | Information and communication technologies for development Arun advocates for moving inferenc…
S39
Google’s AI data centre in Saudi Arabia raises climate concerns — Google has announced plans to open a new AI-focused data centre in Saudi Arabia, aligning with Saudi Arabia’s Public Inv…
S40
Informal Stakeholder Consultation Session — And we truly believe that a transformative digital economy can only be achieved if it is built on the principle of envir…
S41
The Global Power Shift India’s Rise in AI & Semiconductors — So I think this is a great area for public-private partnership, in my view. The public part of it is a uniquely governm…
S42
The Innovation Beneath AI: The US-India Partnership powering the AI Era — Jensen at Davos called this the largest infrastructure build-out in human history. Two weeks ago, 54 countries launched…
S43
Keynote Adresses at India AI Impact Summit 2026 — The remarkable consensus among speakers from both government and private sector suggests strong bilateral alignment on t…
S44
Comprehensive Report: China’s AI Plus Economy Initiative – A Strategic Discussion on Artificial Intelligence Development and Implementation — Zhang and Professor Gong Ke agreed on the fundamental importance of infrastructure development for AI advancement. Their…
S45
World Economic Forum Panel Discussion: Global Economic Growth in the Age of AI — Energy Infrastructure and Affordability Concerns Infrastructure | Economic Unlocking AI’s potential requires accelerat…
S46
From KW to GW Scaling the Infrastructure of the Global AI Economy — The infrastructure demands represent a fundamental shift from traditional data centre design. The speakers noted that wh…
S47
Partnering on American AI Exports Powering the Future India AI Impact Summit 2026 — All speakers agree that the U.S.-India partnership represents a natural, mutually beneficial collaboration based on comp…
S48
Press Conference: Closing the AI Access Gap — An important aspect of the alliance’s work is the creation of relevant international frameworks and public-private partn…
S49
Democratizing AI Building Trustworthy Systems for Everyone — Private sector investment is necessary due to the scale of infrastructure needs that cannot be met by governments alone
S50
AI as critical infrastructure for continuity in public services — “I believe that there is perhaps awareness challenge as well as the capacity challenge, because I think that this whole …
S51
Main Session on Sustainability & Environment | IGF 2023 — The analysis also underscores the importance of policymakers having up-to-date information for evidence-based decisions….
S52
Indias Roadmap to an AGI-Enabled Future — -Energy Infrastructure for AI: Discussion of India’s massive energy requirements for AI data centers, with visibility of…
S53
The Role of Science, Technology and Innovation Policies to Foster the Implementation of the Sustainable Development Goals (SDGs) — – a) interdependencies between SDGs , with the aim to identify both critical trade-offs between policies aimed at ach…
S54
Designing Indias Digital Future AI at the Core 6G at the Edge — Power consumption concerns are driving data centers toward edge deployment Roy emphasizes that infrastructure challenge…
S55
Strategic Action Plan for Artificial Intelligence — In mobile edge computing, the AI application does have its own computing power on board to process data, or the computin…
S56
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Giordano Albertazzi — Albertazzi positioned India as central to the AI evolution, citing several key advantages that make the country particul…
S57
Powering AI _ Global Leaders Session _ AI Impact Summit India Part 2 — This comment shifted the discussion from problem identification to solution positioning, introducing geopolitical and ec…
S58
WS #214 AI Readiness in Africa in a Shifting Geopolitical Landscape — Economic | Infrastructure | Development Need for blended financing approaches combining government, private sector, and…
S59
UN Secretary-General report outlines voluntary financing options for AI capacity building — The UN Secretary-General has issued areport onInnovative Voluntary Financing Options for Artificial Intelligence Capacit…
S60
WS #462 Bridging the Compute Divide a Global Alliance for AI — – Ivy Lau-Schindewolf Barriers to Equitable Access to Computational Power Development | Economic | Infrastructure Rol…
S61
The Innovation Beneath AI: The US-India Partnership powering the AI Era — The panel opened with Kumar’s observation that whilst AI models receive significant attention, the underlying infrastruc…
S62
Comprehensive Discussion Report: AI’s Transformative Potential for Global Economic Growth — The speakers demonstrated strong consensus on AI infrastructure representing unprecedented investment opportunities. The…
S63
The Global Power Shift India’s Rise in AI & Semiconductors — Public-Private Partnership Models and Capital Requirements: The discussion highlighted the need for substantial capital…
S64
The Geoeconomics of Energy and Materials/ DAVOS 2025 — Critical Minerals and Mining: Fatih Birol: First about energy transition means different things for different parts of…
S65
https://dig.watch/event/india-ai-impact-summit-2026/the-innovation-beneath-ai-the-us-india-partnership-powering-the-ai-era — And that makes it more difficult, not less difficult, I think, to be an investor because you have more mature products. …
S66
HIGH LEVEL LEADERS SESSION IV — There’s a risk of overspending on innovations that may not provide the expected benefits.
S67
G20 Contributions on Digital Economy and Digitalization for Development (Indonesia) — Developed countries could provide this as low-hanging fruit
S68
Hype Cycles and Start-ups — Founders and CEOs play a crucial role in navigating the hype cycle by staying grounded and maintaining proximity to the …
S69
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Giordano Albertazzi — Albertazzi positioned India as central to the AI evolution, citing several key advantages that make the country particul…
S70
Designing Indias Digital Future AI at the Core 6G at the Edge — This distributed approach addresses multiple challenges simultaneously, reducing latency for time-critical applications …
S71
Sovereign AI for India – Building Indigenous Capabilities for National and Global Impact — India possesses many essential ingredients for AI success: a robust software services industry, thriving startup ecosyst…
S72
AI and Data Driving India’s Energy Transformation for Climate Solutions — A very important question indeed. When in the public policy, the equity is extremely important. And equity means the ent…
S73
Revisiting 10 AI and digital forecasts for 2025: Predictions and Reality — Collaboration across sectors, robust governance, and strategic investments will be critical in achieving a sustainable a…
S74
Google’s AI data centre in Saudi Arabia raises climate concerns — Google has announced plans to open a new AI-focused data centre in Saudi Arabia, aligning with Saudi Arabia’s Public Inv…
S75
Powering AI _ Global Leaders Session _ AI Impact Summit India Part 2 — An IT architecture, so that each distribution company in India, but for that, that matter anywhere in the world will kno…
S76
Building Climate-Resilient Systems with AI — Google’s representatives, Vrushali Gaud and Spencer Low, detailed how major technology companies are addressing the dual…
S77
Quantum Technologies: Navigating the Path from Promise to Practice — And it was only because of some very deep thinkers around that time who started thinking about quantum computing. One of…
S78
Building the Workforce_ AI for Viksit Bharat 2047 — We know we have 5 .8 million professionals. For example, the Tata AI Saki Immersion Programme is empowering rural women …
S79
AI Infrastructure and Future Development: A Panel Discussion — -Cost Reduction and Efficiency Breakthroughs: The discussion addressed dramatic cost reductions in AI (from $33 to $0.09…
S80
Day 0 Event #270 Everything in the Cloud How to Remain Digital Autonomous — While infrastructure is critical, excessive focus on this layer overlooks significant innovation occurring in foundation…
S81
Workshop 9: Between Green Ambitions and Geopolitical Realities: EU’s Critical Raw Materials Act — – **Hamid Pouran** – Dr., Senior member of IEEE, Working group member on energy and environment, Lecturer on environment…
S82
From chips to jobs: Huang’s vision for AI at Davos 2026 — AIis evolvinginto a foundational economic system rather than a standalone technology, according to NVIDIA chief executiv…
S83
World Economic Forum Panel: Sovereignty and Interconnectedness in the Modern Economy — The 90% dependence on China for critical minerals represents a strategic vulnerability that requires coordinated allied …
S84
https://dig.watch/event/india-ai-impact-summit-2026/the-global-power-shift-indias-rise-in-ai-semiconductors — So the goal of Genesis Project is to really, one, align public and private partnership, two, invest government resources…
S85
https://dig.watch/event/india-ai-impact-summit-2026/panel-discussion-ai-in-healthcare-india-ai-impact-summit — I think that shouldn’t be so, right? And coming back, that is where I think it would be great to introduce Dr. Aditya Ya…
S86
Main Session | Best Practice Forum on Cybersecurity — Oktavía Hrund G Jóns: Thank you so much, Dino. I would like to see, am I audible? You can hear me? Yes, you are. Fa…
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
Tuan Ho
3 arguments · 149 words per minute · 1310 words · 525 seconds
Argument 1
Critical minerals supply chain vulnerability and need for US‑India collaboration (Tuan Ho)
EXPLANATION
Tuan Ho warns that the United States sources over 90% of its rare‑earth magnets from China, creating a strategic vulnerability. He argues that a US‑India partnership is essential to secure the supply chain for AI‑related hardware.
EVIDENCE
He noted that more than 90% of rare-earth magnets are sourced from China, creating a strategic vulnerability for the United States, and emphasized the need for a US-India partnership to secure supply chains, referencing his investment in Vulcan Elements and the opportunity to build infrastructure (see [29-33] and [55-57]).
MAJOR DISCUSSION POINT
Critical minerals supply chain
DISAGREED WITH
Jeff Binder, Ujjwal Kumar, Vrushali Gaud
Argument 2
Mismatch between AI model funding and essential infrastructure needs; importance of government financing (Tuan Ho)
EXPLANATION
Tuan Ho observes that current investment is heavily skewed toward pure AI model development, while the foundational infrastructure—energy, grids, and minerals—remains under‑funded. He stresses that government financing is crucial to bridge this gap.
EVIDENCE
He argued that funding is currently focused on pure AI models, leaving essential infrastructure like energy grids and mineral supply under-funded, and highlighted the role of government financing in addressing this mismatch (see [269-276]).
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
S5 points out a mismatch between AI model funding and infrastructure needs, and S16 notes reduced government and donor financing for such projects.
MAJOR DISCUSSION POINT
Funding mismatch
AGREED WITH
Jeff Binder, Ujjwal Kumar, Participant
DISAGREED WITH
Prince Dhawan, Jeff Binder
Argument 3
Early‑stage founders should target clear infrastructure problems (low‑hanging fruit) to achieve product‑market fit (Tuan Ho)
EXPLANATION
Tuan Ho points out that many industries, such as power grids, have not seen innovation for decades, presenting low‑hanging fruit for investors and founders. Targeting these well‑understood problems can lead to quicker product‑market fit.
EVIDENCE
He identified low-hanging fruit in under-innovated sectors like power grids, noting that many have not been upgraded for decades, which creates a clear opportunity for investors and founders to solve well-understood infrastructure challenges (see [52-55]).
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
S18 describes the importance of capital injection for startups tackling infrastructure challenges, and S23 observes how AI tools enable rapid product development with lower capital.
MAJOR DISCUSSION POINT
Infrastructure opportunities for startups
Ujjwal Kumar
4 arguments · 64 words per minute · 916 words · 850 seconds
Argument 1
US‑India rare‑earth corridor and strategic investments highlighted (Ujjwal Kumar)
EXPLANATION
Ujjwal Kumar highlights the joint US‑India effort to build a rare‑earth minerals corridor, noting major commitments such as Google’s $15 bn investment, the FORGE framework, and new subsea cables, which together signal a historic infrastructure build‑out for AI.
EVIDENCE
He mentioned that the US and India are building together, referencing rare-earth corridors in India’s budget, Google’s $15 bn commitment, the launch of FORGE by 54 countries, and the announcement of a gigawatt-scale AI hub in Vizag along with four new subsea cables (see [17-22]).
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
S5 reports the joint US‑India rare‑earth corridor and multibillion‑dollar commitments, and S20 emphasizes India’s strategic position for AI deployment.
MAJOR DISCUSSION POINT
US‑India AI minerals corridor
AGREED WITH
Tuan Ho, Tobias Helbig, Vrushali Gaud
Argument 2
Success depends on leveraging state‑of‑the‑art AI advances quickly; lagging behind leads to irrelevance (Ujjwal Kumar)
EXPLANATION
Ujjwal Kumar stresses that entrepreneurs must adopt the latest AI tools and capabilities rapidly, otherwise they risk becoming obsolete in a fast‑moving market.
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
S23 notes that entrepreneurs must adopt the latest AI tools or risk becoming irrelevant, reinforcing this point.
MAJOR DISCUSSION POINT
Speed of AI adoption
Argument 3
AI is driving creative destruction of traditional infrastructure sectors, requiring new approaches
EXPLANATION
Ujjwal points out that AI is fundamentally reshaping how core physical systems such as energy, semiconductors, critical minerals and data centres are built and operated. This creative destruction calls for fresh strategies and investments to keep pace with AI‑driven demand.
EVIDENCE
He stated that AI is forcing a creative destruction of how the world builds infrastructure, affecting energy, semiconductors, critical minerals, physical edge systems, and data centres (see [16]).
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
S5 describes AI’s reshaping of energy, semiconductors, and critical minerals, and S26 calls the AI infrastructure build‑out the largest in history.
MAJOR DISCUSSION POINT
AI‑driven infrastructure transformation
AGREED WITH
Tuan Ho, Prince Dhawan, Vrushali Gaud
Argument 4
The AI infrastructure build‑out is the largest in human history, underscoring the unprecedented scale of investment needed
EXPLANATION
Ujjwal cites a comment from Jensen at Davos describing the current AI‑related infrastructure expansion as the biggest ever undertaken. This highlights the massive scale of resources and coordination required to support AI growth.
EVIDENCE
He quoted Jensen at Davos describing the AI-related build-out as the largest infrastructure build-out in human history, indicating the unprecedented scale of investment (see [19]).
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
S26 explicitly states that the current AI‑related infrastructure expansion is the biggest ever undertaken.
MAJOR DISCUSSION POINT
Scale of AI infrastructure
Jeff Binder
4 arguments · 148 words per minute · 1352 words · 546 seconds
Argument 1
Risk of over‑building infrastructure, ROI challenges, and shifting financing dynamics (Jeff Binder)
EXPLANATION
Jeff warns that a massive over‑build of AI infrastructure could lead to poor returns on investment, as resources may become cheap and under‑utilized, creating financial risk for investors.
EVIDENCE
He warned of a potential over-build of AI infrastructure, predicting that within two years resources could become inexpensive and ROI challenges may arise, emphasizing the risk of misaligned investments (see [94-97]).
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
S5 warns of potential over‑build and ROI challenges, and S26 underscores the massive scale of investment required for AI infrastructure.
MAJOR DISCUSSION POINT
Over‑build risk
AGREED WITH
Tuan Ho, Ujjwal Kumar, Participant
DISAGREED WITH
Ujjwal Kumar, Tuan Ho, Vrushali Gaud
Argument 2
Hardware obsolescence risk (e.g., GPUs) and the importance of adaptable semiconductor strategies (Jeff Binder)
EXPLANATION
Jeff notes that rapid advances in chip design can render existing GPU‑based data centers obsolete, making financing decisions more precarious and underscoring the need for flexible hardware strategies.
EVIDENCE
He highlighted that GPUs face obsolescence risk because breakthroughs in chip design could instantly make existing data-center hardware outdated, complicating financing decisions (see [94-97]).
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
S5 notes that breakthroughs in chip design could render existing GPU‑based data centres obsolete, and S27 highlights the need for adaptability in such a fast‑changing hardware landscape.
MAJOR DISCUSSION POINT
GPU obsolescence
Argument 3
AI tools lower capital requirements, enabling faster market entry, but also increase competition and pressure to adopt cutting‑edge tech (Jeff Binder)
EXPLANATION
Jeff explains that AI tools give entrepreneurs a huge leverage, allowing them to bring products to market with a fraction of the capital previously needed, but this also intensifies competition and forces rapid adoption of the latest technologies.
EVIDENCE
He described how AI tools give entrepreneurs massive leverage, enabling them to launch products with a tenth of the usual capital and potentially reach revenue with a single small seed round, while also increasing competitive pressure to adopt cutting-edge tech (see [80-86]).
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
S23 reports that AI tools dramatically reduce capital requirements for founders, while also intensifying competition.
MAJOR DISCUSSION POINT
AI‑driven capital efficiency
DISAGREED WITH
Tuan Ho, Prince Dhawan
Argument 4
AI will diminish cultural barriers in product development, allowing entrepreneurs to tap global front‑end talent more effectively
EXPLANATION
Jeff observes that cultural differences in front‑end development have historically hampered cross‑border collaboration. He argues that AI tools will reduce these frictions, enabling startups to leverage talent from India, China and other regions more seamlessly.
EVIDENCE
He noted that cultural differences in front-end development make cross-border collaboration difficult, but argued that AI will change this, enabling entrepreneurs to leverage talent from India, China and elsewhere more easily (see [71-79]).
MAJOR DISCUSSION POINT
Cross‑border talent and cultural barriers
Tobias Helbig
3 arguments · 150 words per minute · 874 words · 348 seconds
Argument 1
Hype‑cycle dynamics and long‑term impact underestimation affect investment decisions (Tobias Helbig)
EXPLANATION
Tobias argues that the industry tends to overestimate short‑term AI impact while underestimating its long‑term consequences, leading to cycles of hype, disillusionment, and eventual recovery.
EVIDENCE
He pointed out that industry often overestimates the next two years while underestimating the next ten, referencing IBM’s 1942 comment and current AI hype, and warned that this can cause cycles of disillusionment before recovery (see [308-313] and [319-326]).
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
S5 discusses the tendency to overestimate short‑term AI impact while underestimating long‑term consequences.
MAJOR DISCUSSION POINT
Hype‑cycle and investment
Argument 2
Transition from data‑center‑centric AI to low‑power edge devices; need for new semiconductor designs (Tobias Helbig)
EXPLANATION
Tobias describes a shift toward billions of low‑power edge devices that run AI locally, requiring new semiconductor designs that differ from traditional data‑center hardware.
EVIDENCE
He described a move from data-center-centric AI to billions of edge devices, giving the example of a marathon-watch that runs 12 days on a charge, illustrating the need for new semiconductor designs for edge AI (see [218-227]).
MAJOR DISCUSSION POINT
Edge AI and semiconductors
DISAGREED WITH
Jeff Binder, Ujjwal Kumar
Argument 3
Building semiconductor R&D and manufacturing capacity in India, leveraging decades of local expertise, is critical to meet AI hardware demand
EXPLANATION
Tobias emphasizes that India hosts long‑standing semiconductor development centers in Noida, Delhi and Bangalore, reflecting deep technical expertise. Strengthening this capacity is essential for supplying the chips and sensors required by AI systems.
EVIDENCE
He mentioned that NXP’s development centers have been operating in Noida, Delhi and Bangalore for decades, reflecting a long-standing semiconductor expertise in India that can be leveraged for AI hardware (see [317-319]).
MAJOR DISCUSSION POINT
Semiconductor capacity in India
Prince Dhawan
2 arguments · 129 words per minute · 884 words · 410 seconds
Argument 1
AI scaling hinges on programmable, intelligent grids; India Energy Stack enables distributed energy trading for data centers (Prince Dhawan)
EXPLANATION
Prince asserts that AI’s growth depends on programmable, intelligent power grids, and explains how the India Energy Stack creates interoperable layers that allow data centers to source power from millions of distributed rooftop solar assets in near real‑time.
EVIDENCE
He explained that AI scaling requires programmable grids, described the India Energy Stack’s interoperable rails, and illustrated how data centers can dynamically source power from distributed rooftop solar panels, with measurement, identification, and settlement happening in near real-time (see [161-188]).
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
S5 lists energy systems as a key component of AI scaling infrastructure, and S20 mentions India’s growth market supporting such developments.
MAJOR DISCUSSION POINT
Programmable grids for AI
AGREED WITH
Vrushali Gaud, Ujjwal Kumar, Tuan Ho
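The measurement, identification and settlement flow Prince describes can be pictured as a simple matching loop. The sketch below is purely illustrative: the asset names, prices and the greedy cheapest-first matching rule are assumptions for the example, not the actual India Energy Stack design or API.

```python
from dataclasses import dataclass

@dataclass
class SolarAsset:
    asset_id: str         # hypothetical registered identity on the energy stack
    available_kwh: float  # energy offered in this settlement window
    price_per_kwh: float  # asking price (illustrative units)

def settle_demand(demand_kwh: float, assets: list[SolarAsset]) -> list[dict]:
    """Greedily match a data center's demand against distributed rooftop
    supply, cheapest first, and return one settlement record per draw."""
    settlements = []
    remaining = demand_kwh
    for asset in sorted(assets, key=lambda a: a.price_per_kwh):
        if remaining <= 0:
            break
        drawn = min(asset.available_kwh, remaining)
        remaining -= drawn
        settlements.append({
            "asset_id": asset.asset_id,                       # identification
            "kwh": drawn,                                     # measurement
            "amount": round(drawn * asset.price_per_kwh, 2),  # settlement
        })
    return settlements

# Illustrative rooftop assets and an 80 kWh data-center demand window
assets = [
    SolarAsset("roof-001", 40.0, 4.5),
    SolarAsset("roof-002", 25.0, 4.2),
    SolarAsset("roof-003", 60.0, 5.0),
]
print(settle_demand(80.0, assets))
```

In a real system each settlement window would repeat in near real-time and the interoperable layers would handle metering, identity and payment rails; the loop above only shows how demand can be decomposed across millions of small distributed assets.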
Argument 2
Private‑sector commitments such as Reliance’s trillion‑dollar AI infrastructure plan highlight the massive financial backing needed for India’s AI growth
EXPLANATION
Prince references a major pledge by Reliance to invest a trillion dollars over seven years, signalling that large private capital is being directed toward AI‑related infrastructure. This underscores the importance of private financing alongside public initiatives.
EVIDENCE
He referenced Reliance’s announcement of a trillion-dollar investment over the next seven years, underscoring the scale of private sector funding aimed at AI infrastructure in India (see [190-192]).
MAJOR DISCUSSION POINT
Private sector investment
Vrushali Gaud
4 arguments · 189 words per minute · 1506 words · 477 seconds
Argument 1
Google’s $15 bn India commitment driven by massive user base, growth market, and need for robust physical infrastructure (Vrushali Gaud)
EXPLANATION
Vrushali links Google’s $15 bn investment to India’s billion‑plus user base, rapid technology adoption, and the necessity for strong physical infrastructure such as data centers and subsea cables to support AI growth.
EVIDENCE
She connected Google’s $15 bn India commitment to the country’s billion-plus users, fast tech adoption, and the need for robust physical infrastructure, citing the subsea cable announcements and the scale of AI-related hardware requirements (see [140-146]).
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
S5 records Google’s $15 bn investment in India as part of the AI infrastructure push.
MAJOR DISCUSSION POINT
Google’s India investment rationale
DISAGREED WITH
Jeff Binder, Ujjwal Kumar, Tuan Ho
Argument 2
Google’s Climate Tech Center focuses on green skilling, low‑carbon materials, and sustainable aviation fuel pilots (Vrushali Gaud)
EXPLANATION
Vrushali outlines the Climate Tech Center’s three pillars: building green skills for decarbonisation, developing low‑carbon construction materials, and piloting sustainable aviation fuel projects, all aimed at outcome‑based innovation.
EVIDENCE
She described the Center’s partnership with the Indian government, its focus on green skilling, low-carbon materials for construction, and pilots for sustainable aviation fuel, emphasizing outcome-based results (see [339-367]).
MAJOR DISCUSSION POINT
Climate Tech Center priorities
Argument 3
India’s renewable potential and clean‑energy policies make it a prime location for AI‑driven power demand (Vrushali Gaud)
EXPLANATION
Vrushali highlights India’s abundant solar and wind resources, supportive policies, and favorable economics, arguing that these factors make India an ideal hub for meeting AI’s growing energy needs.
EVIDENCE
She emphasized India’s large renewable potential, abundant solar and wind resources, supportive policies, and the favorable economics of clean-energy deployment, positioning the country as a prime location for AI-driven power demand (see [150-156]).
MAJOR DISCUSSION POINT
India’s clean‑energy advantage
AGREED WITH
Prince Dhawan, Ujjwal Kumar, Tuan Ho
Argument 4
Realizing AI’s potential requires developing the full stack—including data‑centre construction, network connectivity and energy systems—so that software advances can be effectively deployed
EXPLANATION
Vrushali stresses that AI success depends not only on models and applications but also on the underlying physical layer: robust data centres, high‑capacity networks (including subsea cables) and reliable, clean energy. Without these foundations, AI innovations cannot be scaled.
EVIDENCE
She described the AI stack as spanning software models to the foundational physical layer, including data-centre construction, network design and energy supply, and emphasized that without these physical components AI cannot be realized (see [109-121] and [131-139]).
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
S5 emphasizes the need for the full physical AI stack (data centres, networks, energy), and S26 describes the unprecedented scale of the AI infrastructure build‑out.
MAJOR DISCUSSION POINT
Full AI stack development
Participant
1 argument · 56 words per minute · 156 words · 166 seconds
Argument 1
Closing the gap between infrastructure commitments and actual capacity is essential for AI development
EXPLANATION
The opening remarks stress that the existing infrastructure is insufficient and that a significant gap exists between what has been pledged and what is currently available. Bridging this gap is presented as a prerequisite for scaling AI initiatives.
EVIDENCE
The speaker highlighted that current infrastructure is insufficient and emphasized the need to close the gap between existing commitments and actual capacity (see [2]).
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
S26 highlights the infrastructure gap as a key challenge for AI scaling, and S19 discusses regulatory and investment gaps in gigabit network deployment.
MAJOR DISCUSSION POINT
Infrastructure gap
Agreements
Agreement Points
Robust physical infrastructure (energy, grids, minerals) is essential for scaling AI.
Speakers: Ujjwal Kumar, Tuan Ho, Prince Dhawan, Vrushali Gaud
AI is driving creative destruction of traditional infrastructure sectors, requiring new approaches (Ujjwal Kumar)
Critical minerals supply chain vulnerability and need for US–India collaboration (Tuan Ho)
AI scaling hinges on programmable, intelligent grids; India Energy Stack enables distributed energy trading for data centers (Prince Dhawan)
Realizing AI’s potential requires developing the full stack—including data‑centre construction, network connectivity and energy systems—so that software advances can be effectively deployed (Vrushali Gaud)
All speakers emphasized that AI growth depends on a solid physical foundation, including critical minerals, reliable power grids, and comprehensive data-centre and network infrastructure, and that without these the AI ecosystem cannot scale [16][19][29-33][43-46][161-188][109-121].
POLICY CONTEXT (KNOWLEDGE BASE)
The consensus that AI growth hinges on energy, grid and mineral supply chains is reflected in the public-private partnership emphasis for critical infrastructure [S41] and the World Economic Forum call for accelerated energy infrastructure and grid modernization to support AI [S45]; China’s AI strategy similarly foregrounds foundational infrastructure such as data centres and renewable energy systems [S44].
US‑India collaboration is pivotal for AI hardware supply chains and semiconductor capacity.
Speakers: Ujjwal Kumar, Tuan Ho, Tobias Helbig, Vrushali Gaud
US‑India rare‑earth corridor and strategic investments highlighted (Ujjwal Kumar)
Critical minerals supply chain vulnerability and need for US–India collaboration (Tuan Ho)
Building semiconductor R&D and manufacturing capacity in India, leveraging decades of local expertise, is critical to meet AI hardware demand (Tobias Helbig)
The panel highlighted the strategic importance of a US-India partnership for securing rare-earth supplies, expanding semiconductor R&D, and supporting AI infrastructure, underscoring India’s role in the global AI supply chain [17-22][55-57][317-319][140-146].
Government and public‑private financing are crucial to bridge the infrastructure gap for AI.
Speakers: Tuan Ho, Jeff Binder, Ujjwal Kumar, Participant
Mismatch between AI model funding and essential infrastructure needs; importance of government financing (Tuan Ho)
Risk of over‑building infrastructure, ROI challenges, and shifting financing dynamics (Jeff Binder)
Closing the gap between infrastructure commitments and actual capacity is essential for AI development (Participant)
Speakers concurred that substantial public and private investment, especially government-backed financing, is needed to close the gap between pledged AI infrastructure and actual capacity, and to avoid over-building risks [269-276][369-376][2].
India’s renewable energy potential makes it an ideal hub for AI‑driven power demand.
Speakers: Vrushali Gaud, Prince Dhawan, Ujjwal Kumar, Tuan Ho
India’s renewable potential and clean‑energy policies make it a prime location for AI‑driven power demand (Vrushali Gaud)
AI scaling hinges on programmable, intelligent grids; India Energy Stack enables distributed energy trading for data centers (Prince Dhawan)
Multiple participants highlighted India’s abundant solar and wind resources, supportive policies, and the ability to integrate distributed renewable energy with AI workloads, positioning the country as a key market for AI energy needs [150-156][161-188][18][43-45].
POLICY CONTEXT (KNOWLEDGE BASE)
India’s massive AI data-center power needs (16 GW and rising) and its renewable capacity are documented in the roadmap report on India’s AGI-enabled future [S52]; analysts highlight abundant power availability as a strategic advantage for AI infrastructure investment [S56]; the sustainability session underscores renewable integration for AI growth [S45].
Similar Viewpoints
Both emphasized that without coordinated government financing, AI infrastructure projects risk either under‑funding critical foundations or suffering from over‑investment and poor ROI [269-276][369-376].
Speakers: Tuan Ho, Jeff Binder
Mismatch between AI model funding and essential infrastructure needs; importance of government financing (Tuan Ho)
Risk of over‑building infrastructure, ROI challenges, and shifting financing dynamics (Jeff Binder)
Both recognized a shift toward edge AI and the need to rethink hardware investments to avoid over‑building centralized data‑center capacity [218-227][237-240].
Speakers: Jeff Binder, Tobias Helbig
Transition from data‑center‑centric AI to low‑power edge devices; need for new semiconductor designs (Tobias Helbig)
Risk of over‑building infrastructure, ROI challenges, and shifting financing dynamics (Jeff Binder)
Unexpected Consensus
Large private‑sector commitments from both Google and Reliance signal a coordinated push for AI infrastructure in India.
Speakers: Vrushali Gaud, Prince Dhawan
Google’s $15 bn India commitment driven by massive user base, growth market, and need for robust physical infrastructure (Vrushali Gaud)
Private‑sector commitments such as Reliance’s trillion‑dollar AI infrastructure plan highlight the massive financial backing needed for India’s AI growth (Prince Dhawan)
It was unexpected that two distinct private entities, Google and Reliance, are each committing multibillion-dollar resources to AI infrastructure in India, indicating a converging private-sector confidence in the market’s potential [140-146][190-192].
POLICY CONTEXT (KNOWLEDGE BASE)
Google’s $15 billion investment announced at the AI Impact Summit illustrates large private-sector commitment [S42]; Reliance’s involvement is referenced in the broader private-sector alignment noted by summit speakers, positioning India as a hub for AI investment [S43][S56].
Overall Assessment

The panel showed strong consensus on the necessity of robust physical infrastructure, the strategic US‑India partnership, and the pivotal role of government and private financing to bridge the AI infrastructure gap, with particular emphasis on India’s renewable energy advantage and emerging edge AI trends.

High consensus across multiple speakers, suggesting coordinated policy and investment actions are likely to be pursued to support AI scaling.

Differences
Different Viewpoints
Risk of over‑building AI infrastructure versus viewing the build‑out as a historic opportunity
Speakers: Jeff Binder, Ujjwal Kumar, Tuan Ho, Vrushali Gaud
Risk of over‑building infrastructure, ROI challenges, and shifting financing dynamics (Jeff Binder)
The AI infrastructure build‑out is the largest in human history, underscoring unprecedented scale of investment needed (Ujjwal Kumar)
Critical minerals supply chain vulnerability and need for US‑India collaboration (Tuan Ho)
Google’s $15 bn India commitment driven by massive user base, growth market, and need for robust physical infrastructure (Vrushali Gaud)
Jeff warns that a massive, rapid AI infrastructure build could lead to over-capacity and poor ROI, predicting resources will become cheap and under-utilized [94-97]. In contrast, Ujjwal, Tuan and Vrushali portray the same build-out as a historic, necessary opportunity, citing Jensen’s comment that it is the largest infrastructure build-out ever [19] and highlighting huge public-private commitments [17-22][55-57][140-146]. The speakers therefore disagree on whether the current pace of investment is prudent or excessive.
Primary source of financing for AI‑related infrastructure – government versus private/venture capital
Speakers: Tuan Ho, Prince Dhawan, Jeff Binder
Mismatch between AI model funding and essential infrastructure needs; importance of government financing (Tuan Ho)
Private‑sector commitments such as Reliance’s trillion‑dollar AI infrastructure plan highlight the massive financial backing needed (Prince Dhawan)
AI tools lower capital requirements, enabling faster market entry, but also increase competition and pressure to adopt cutting‑edge tech (Jeff Binder)
Tuan stresses that government financing is essential to bridge the gap between AI model funding and the under-funded infrastructure layer [269-276]. Prince points to huge private-sector pledges, notably Reliance’s trillion-dollar plan, as the engine for AI growth [190-192]. Jeff highlights how venture-capital dynamics are shifting, with AI tools allowing founders to launch with minimal seed capital, altering traditional financing models [80-86]. The three speakers therefore disagree on which financing mechanism should dominate the effort.
Strategic focus: data‑center‑centric AI versus a shift to billions of low‑power edge devices
Speakers: Tobias Helbig, Jeff Binder, Ujjwal Kumar
Transition from data‑center‑centric AI to low‑power edge devices; need for new semiconductor designs (Tobias Helbig)
Resources will be consumed, focus on centralization moving to hybrid approaches (Jeff Binder)
AI is forcing creative destruction of traditional infrastructure, including data centres (Ujjwal Kumar)
Tobias argues that the next wave of AI will move from large data-centres to billions of edge devices, requiring new low-power semiconductor designs [218-227]. Jeff, while acknowledging edge importance, emphasizes that current resources will be consumed and that the industry is moving from centralization to hybrid models, keeping data-centres central for now [235-237]. Ujjwal also stresses AI-driven creative destruction of existing infrastructure, focusing on the massive data-centre and grid build-out [16]. The panelists therefore diverge on where the primary investment and development focus should lie.
POLICY CONTEXT (KNOWLEDGE BASE)
Experts note a shift driven by power-consumption constraints, with data-center designs moving toward edge deployments due to energy concerns [S54] and mobile edge computing enabling on-device AI processing [S55]; at the same time, scaling AI infrastructure demands higher rack power densities, emphasizing data-center considerations [S46].
Unexpected Differences
Priority of programmable power grids versus front‑end product development challenges
Speakers: Prince Dhawan, Jeff Binder
AI scaling hinges on programmable, intelligent grids; India Energy Stack enables distributed energy trading for data centres (Prince Dhawan)
AI will diminish cultural barriers in product development, allowing entrepreneurs to tap global front‑end talent more effectively (Jeff Binder)
Prince positions the power grid as the binding constraint for AI growth, while Jeff focuses on overcoming cultural and front-end development barriers as the main hurdle. The two perspectives highlight very different bottlenecks, energy infrastructure versus software development, an unexpected divergence given the shared AI focus [161-166][71-79].
Overall Assessment

The panel broadly concurs that AI scaling demands massive physical infrastructure and US‑India collaboration, but they diverge on three core fronts: (1) whether the current pace risks over‑building, (2) which financing model—government, private, or venture—should lead the effort, and (3) whether investment should stay data‑center‑centric or pivot to edge‑device ecosystems. These disagreements reflect differing risk assessments, funding philosophies, and technology road‑maps, suggesting that coordinated policy and investment strategies will be needed to reconcile optimism with caution.

Moderate to high – while there is consensus on the need for infrastructure, the panelists hold contrasting views on scale, financing sources, and strategic focus, which could affect the speed and sustainability of AI deployment in the region.

Partial Agreements
All speakers agree that robust physical infrastructure (minerals, grids, data‑centres, networks) is a prerequisite for scaling AI, but they differ on which layer should be prioritised – minerals and supply chains, programmable grids, or network/build‑out – to achieve the shared goal. [16][29-33][161-166][109-121][131-139]
Speakers: Ujjwal Kumar, Tuan Ho, Prince Dhawan, Vrushali Gaud
AI is driving creative destruction of traditional infrastructure sectors, requiring new approaches (Ujjwal Kumar)
Critical minerals supply chain vulnerability and need for US‑India collaboration (Tuan Ho)
AI scaling hinges on programmable, intelligent grids; India Energy Stack enables distributed energy trading (Prince Dhawan)
Full AI stack development – software to physical infrastructure – is essential for AI deployment (Vrushali Gaud)
All three endorse stronger US‑India cooperation for AI‑related infrastructure, yet Ujjwal and Tuan focus on rare‑earth minerals, whereas Prince emphasizes the Indian grid and energy stack as the key enabler. [17-22][55-57][161-166]
Speakers: Ujjwal Kumar, Tuan Ho, Prince Dhawan
US‑India rare‑earth corridor and strategic investments highlighted (Ujjwal Kumar)
Critical minerals supply chain vulnerability and need for US‑India collaboration (Tuan Ho)
India Energy Stack as a platform for AI‑driven power demand (Prince Dhawan)
Takeaways
Key takeaways
AI scaling depends on a robust physical stack—critical minerals, energy grids, semiconductors, and data‑center/edge infrastructure.
The US‑India rare‑earth corridor is seen as essential to reduce strategic vulnerability and support AI hardware supply chains.
Investors see a mismatch: abundant funding for AI models but insufficient capital for underlying infrastructure such as power grids, mineral processing, and low‑power edge chips.
Modern, programmable, and renewable‑focused energy grids (e.g., India Energy Stack) are critical to meet AI’s power demand and enable distributed sourcing for data centers.
Google’s $15 bn India commitment is driven by the large user base, growth potential, and the need for clean‑energy‑linked infrastructure; its Climate Tech Center will target green skilling, low‑carbon materials, and sustainable aviation fuel pilots.
Future AI value will shift from large data‑center‑centric compute to low‑power edge devices, requiring new semiconductor designs and adaptable hardware strategies.
Entrepreneurial success hinges on targeting clear infrastructure problems, leveraging cutting‑edge AI tools to reduce capital needs, and moving quickly to adopt state‑of‑the‑art technology.
Resolutions and action items
Continue deepening US‑India collaboration on the critical‑minerals supply chain (e.g., support for Vulcan Elements and related ventures).
Leverage government financing programs (US federal, Indian ministries) to fund AI‑related infrastructure projects, especially grid modernization and renewable integration.
Google to operationalize its Climate Tech Center in India, focusing on green skilling, low‑carbon construction materials, and sustainable aviation fuel pilots.
Encourage early‑stage founders to pursue “low‑hanging‑fruit” infrastructure problems (e.g., grid‑interoperability platforms, renewable integration tools).
Unresolved issues
Potential over‑building of AI compute capacity and the resulting ROI challenges remain uncertain.
How to align private‑sector venture financing with the long‑term, capital‑intensive nature of grid and mineral‑processing projects.
Specific pathways for sourcing critical minerals outside of China and scaling refining capacity have not been fully detailed.
Mechanisms for rapid, near‑real‑time settlement of distributed energy trades for data‑center power are still in development.
Risk of hardware obsolescence (e.g., GPU cycles) and its impact on debt financing structures lacks a clear solution.
Suggested compromises
Balance investment between data‑center expansion and edge‑device development to avoid over‑concentration on one side of the stack.
Right‑size infrastructure spending by matching grid‑upgrade timelines (decades) with AI deployment cycles (quarters), using the India Energy Stack as a coordination layer.
Combine government‑backed large‑scale funding with targeted private‑sector venture capital for specific infrastructure “low‑hanging‑fruit” opportunities.
Adopt a phased approach: prioritize renewable energy and grid modernization now, while allowing flexibility for future hardware upgrades to mitigate obsolescence risk.
Thought Provoking Comments
AI is forcing creative destruction of how the world builds infrastructure – from critical minerals to energy, semiconductors, and physical edge systems – and we are seeing the largest infrastructure build‑out in human history.
Sets the macro context that shifts the conversation from AI models to the material and energy foundations required for scaling AI, framing the entire panel’s focus.
Established the central theme, prompting each subsequent speaker to address their slice of the infrastructure stack (minerals, power grids, data centers, etc.) and aligning the discussion around tangible, cross‑sector challenges.
Speaker: Ujjwal Kumar
We often talk about the industrial revolution AI will create, but we forget the underlying inputs – clean power, critical minerals, and decades‑old power grids – which represent huge low‑hanging fruit for investors.
Highlights a blind spot in AI discourse, redirecting attention to the foundational supply‑chain and grid modernization opportunities that are under‑invested.
Shifted the dialogue from model hype to concrete investment opportunities, leading Jeff and others to discuss grid resilience, renewable integration, and the risk of over‑building.
Speaker: Tuan Ho
AI will drastically change the ability to leverage cross‑border talent, especially on the front‑end of products, allowing entrepreneurs to bring ideas to market with a fraction of the capital previously required.
Introduces the idea that AI not only drives hardware demand but also transforms software development economics and talent dynamics across geographies.
Prompted discussion on speed of innovation, lowered capital barriers, and later fed into concerns about over‑build and ROI, influencing the conversation about market dynamics and investor challenges.
Speaker: Jeff Binder
Why India? Because it’s a billion‑plus user market with a young, tech‑savvy population that can leapfrog traditional growth paths, combined with favorable clean‑energy economics and a nascent digital energy stack.
Provides a concise, multi‑dimensional justification for focusing AI infrastructure investment in India, linking market size, talent, policy, and energy potential.
Validated Ujjwal’s earlier points, deepened the focus on India, and set the stage for Prince’s detailed explanation of the India Energy Stack and grid programmability.
Speaker: Vrushali Gaud
AI will not scale unless power is programmable; the binding constraint will be intelligent, resilient grids, not chips. India’s Energy Stack creates interoperable, near‑real‑time layers that let data centers source power from millions of distributed rooftop solar assets.
Introduces a novel concept—programmable electricity and P2P energy trading—as the critical enabler for AI compute, reframing the bottleneck from hardware to grid intelligence.
Shifted the conversation to the operational side of energy, inspiring follow‑ups from Vrushali and Jeff about ROI risks and the need for new grid business models.
Speaker: Prince Dhawan
The current data‑center build is the ‘five computers’ of our era; the next wave will be billions of edge devices that run AI locally, demanding a shift from feeding a central beast to creating ultra‑efficient, low‑power models.
Provides a forward‑looking analogy that expands the scope beyond data centers to edge AI, highlighting a future paradigm shift in hardware and energy consumption.
Prompted Jeff to affirm the central‑to‑decentral transition, introduced the idea of edge‑centric ROI, and added depth to the discussion about long‑term sustainability of AI infrastructure.
Speaker: Tobias Helbig
There is a mismatch between what’s being funded in pure AI model startups and what’s needed in infrastructure‑type businesses; infrastructure problems are clearer, more durable, and less prone to rapid obsolescence.
Challenges the prevailing funding trends, urging a reallocation of capital toward foundational infrastructure rather than fleeting model hype.
Reoriented the latter part of the panel toward funding strategy, influencing Jeff’s remarks on measurable outcomes and the risk of over‑investment in volatile hardware.
Speaker: Tuan Ho
Government financing at the scale of hundreds of billions in the US and comparable commitments in India creates a unique environment where the industrial revolution driven by AI and the industrial revolution required by AI can happen simultaneously.
Synthesizes the macro‑economic backdrop, emphasizing policy as a catalyst that aligns AI demand with supply‑side investments.
Served as a concluding turning point, tying together earlier themes of infrastructure, energy, and investment, and leaving the audience with a forward‑looking, policy‑driven outlook.
Speaker: Tuan Ho (closing)
Overall Assessment

The discussion was anchored by Ujjwal’s framing of AI as an infrastructure challenge, which opened space for each expert to surface a distinct layer of the problem—critical minerals, power grids, talent, and edge computing. The most pivotal moments occurred when Tuan highlighted the overlooked supply‑chain and grid issues, Prince introduced the concept of programmable power via the India Energy Stack, and Tobias shifted focus to the impending edge‑device wave. These insights redirected the conversation from model hype to concrete, systemic bottlenecks and investment strategies, prompting participants to explore risk, ROI, and policy dimensions. Collectively, the key comments steered the panel toward a holistic view of AI’s future—one that intertwines technology, energy, geography, and government action—thereby deepening the analysis and setting a clear agenda for innovators and investors.

Follow-up Questions
What are the specific investment opportunities and structures for the US‑India critical minerals corridor, especially regarding rare‑earth magnet supply chains?
Understanding the investor side of the corridor is crucial for mobilizing capital to secure critical mineral supplies needed for AI hardware.
Speaker: Ujjwal Kumar (asked to Tuan Ho)
How can the supply chain for critical minerals be sourced, refined, and scaled to meet AI infrastructure demand?
Identifying sources, refining capacity, and logistics is essential to reduce strategic vulnerabilities and support AI hardware production.
Speaker: Tuan Ho (implied)
What are the most effective strategies for upgrading and modernizing power grids, particularly in India, to handle the programmable and high‑peak demand of AI data centers?
Grid modernization is a bottleneck for AI scalability; research is needed on technologies, financing, and timelines for grid upgrades.
Speaker: Tuan Ho (implied)
What are the risks and potential ROI implications of a possible over‑build of AI infrastructure, and how can investors mitigate these risks?
Over‑investment could lead to stranded assets; analyzing scenarios helps investors make informed decisions.
Speaker: Jeff Binder (implied)
How feasible is peer‑to‑peer (P2P) energy trading for powering data centers using distributed rooftop solar, and what regulatory or technical frameworks are required?
P2P trading could unlock new renewable sources for AI workloads, but requires robust measurement, settlement, and policy mechanisms.
Speaker: Prince Dhawan (implied)
What low‑carbon materials (e.g., steel, cement) can be developed and scaled for construction of AI data centers and other infrastructure to reduce embodied carbon?
Materials innovation is needed to align AI infrastructure expansion with climate goals.
Speaker: Vrushali Gaud (implied)
How can sustainable aviation fuel (SAF) pilots be designed and implemented in fast‑growing Indian aviation markets to support AI‑driven logistics and travel?
SAF represents a growing low‑carbon opportunity; pilots would provide data on scalability and impact.
Speaker: Vrushali Gaud (implied)
What advances are required in ultra‑low‑power edge AI chips and battery technologies to enable long‑duration, autonomous AI devices?
Edge AI devices will be the next wave; research into power‑efficient hardware is critical for widespread deployment.
Speaker: Tobias Helbig (implied)
How will future hardware roadmaps (e.g., GPU, ASIC breakthroughs) affect the obsolescence risk of current data center investments?
Predicting hardware evolution helps investors avoid stranded infrastructure and guides strategic planning.
Speaker: Jeff Binder (implied)
What is the impact of large‑scale government financing (e.g., US federal, Indian ministries) on accelerating AI‑related infrastructure projects, and how can private investors align with these policies?
Understanding policy‑driven funding streams can shape investment strategies and public‑private partnerships.
Speaker: Tuan Ho (implied)
How does the FORGE global framework for AI‑critical minerals operate, and what gaps exist in its implementation across countries?
Assessing the effectiveness of FORGE will inform international coordination on mineral supply security.
Speaker: Ujjwal Kumar (referencing summit)
What skill‑development programs are needed in Tier‑2 and Tier‑3 Indian cities to build a workforce capable of supporting green and AI technologies?
Workforce readiness is essential for scaling clean‑energy and AI projects in emerging regions.
Speaker: Vrushali Gaud (implied)
Is there a mismatch between the types of AI startups receiving funding (e.g., model‑centric) versus the infrastructure‑focused ventures needed for sustainable AI growth?
Identifying funding gaps can redirect capital toward durable, high‑impact infrastructure solutions.
Speaker: Tuan Ho (explicit)
How can programmable, resilient grids be designed to meet the real‑time, high‑peak compute demand of AI workloads at scale?
Programmable grids are a prerequisite for reliable AI compute; research is needed on control systems and scalability.
Speaker: Prince Dhawan (implied)
What business models and financing structures best support the integration of renewable energy into AI data center operations to ensure economic viability?
Aligning clean‑energy adoption with profitable data center operation requires innovative financing and operational models.
Speaker: Vrushali Gaud (implied)

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

Regulating Open Data: Principles, Challenges and Opportunities

Session at a glanceSummary, keypoints, and speakers overview

Summary

The panel debated whether India should shift from voluntary open-data initiatives to a statutory regulatory framework that obliges government bodies to share standardized, aggregated datasets. Panelists argued that without legal teeth participation is uneven, leaving investors and developers with unreliable data, and that the core question is whether openness should become an institutional obligation rather than optional goodwill [11-13][28-30].


Shashi Tharoor framed the issue as a question of power, noting that AI now underpins modern society and that regulation of open data shapes sovereignty, innovation and fairness [42-51]. He defined open data as minimally restricted data that, when thoughtfully designed, becomes public infrastructure that strengthens transparency, levels market playing fields and enables citizen participation [56-61]. He illustrated the impact of open data with examples such as the U.S. release of meteorological data that spawned private ecosystems in weather forecasting and logistics, and the COVID-19 health dashboards that accelerated coordinated responses [68-71][73-74]. India’s own open-government data platform has been used to track welfare coverage and expose implementation leakages, demonstrating tangible governance benefits [66].


Panelists warned that poorly structured openness can create vulnerabilities, entrench the digital ascendancy of foreign cloud providers, and lead to data capitulation without domestic capacity building [77-82]. They proposed a credible framework that begins with a clear purpose and adds strong anonymisation, consent mechanisms and accountability standards, linking openness to domestic research, startups and digital infrastructure [88-95][96-100]. While cross-border flows remain essential, the framework should ensure reciprocity and protect policy space, a stance echoed in the G20 New Delhi Leaders' Declaration and the UN Global Digital Compact [101-106][107-111].


Rama Vedashree traced the Indian open-data movement to early-2010s policies, emphasizing the need for AI-ready data, metadata standards and API-based access rather than static CSV files [147-164]. Irina Ghose highlighted that trust requires contextual Indian-language data, the Model-Context Protocol (MCP) for interoperability, and collaborative efforts with global partners to make data openly available for AI development [180-191]. Cyril Shroff argued that regulatory clarity is a foundation for innovation, likening data markets to capital markets, where uniform rules create investor confidence and trust [201-204][268-278]. Arun Prabhu pointed out the absence of clear anonymisation standards, public-data interchange protocols and purpose definitions, asserting that without these legal pillars a sustainable open-data ecosystem cannot emerge [259-262]. Sasmit Patra stressed that citizen consent and political willingness are crucial, noting that even anonymised transaction data may face resistance without trust [221-227]. Asha Jadeja Motwani warned that reliance on the U.S. technology stack creates geopolitical risk and called for a joint regulatory framework to ensure data benefits flow back to India, reinforcing the panel's consensus that structured openness with safeguards is essential for a sovereign digital future [327-339].


Overall, the discussion converged on the need for a purpose-driven, secure and capacity-building open-data regulatory regime that balances transparency, innovation and national sovereignty, positioning India to shape a fairer digital order.


Keypoints

Statutory regulation is needed to move India from voluntary open-data initiatives to a binding framework that guarantees consistent, secure, and accountable data sharing across ministries.


The opening scenario frames the debate around "statutory mandates … that actually requires government bodies to share standardized aggregated data sets" and the risks of "uneven participation" and "no enforcement" ([11-20]). Tharoor stresses that "regulation of open data is … a question of power" and that a "legal backbone" is essential for "institutional obligation" ([50-53]). Vedashree recounts the early, largely voluntary policy (NDSAP, 2012) and notes that "the focus was on opening up government data … not really a primary objective" and that "just opening up government data is not enough" ([147-166]). Arun highlights the absence of "clear identified anonymisation standard, clear identified public data interchange standards" and a "clear recognised purpose" in current law, arguing that without these a "sustainable open data ecosystem" cannot emerge ([259-262]).


Open data functions as public infrastructure that can drive transparency, innovation, and economic growth when paired with proper standards and capacity-building.


Tharoor cites the U.S. release of meteorological data that “laid the groundwork for entire private ecosystems” and the COVID-19 dashboards that “enabled faster responses” ([68-74]). He also points to India’s own open-government platform that “has been used to track welfare coverage and expose leakages” ([66]) and to IndiaStack’s role in scaling inclusive digital services ([119-123]). The argument is that “when data is treated as shared infrastructure … it lowers barriers, improves decision-making, and enables societies … to turn information into durable capacity” ([75-76]).


Data sovereignty and digital ascendancy raise geopolitical concerns; without domestic capacity, open data can exacerbate inequality and external capture of value.


Tharoor describes the “digital ascendancy” where “most of the world’s large cloud servers … are owned … by a small number of technology companies” and explains how data generated in developing countries is often processed abroad, leading to “digital capitulation” and loss of value ([77-82]). He calls for “openness with guardrails” that “creates resilience” and stresses that “openness must be tied to domestic capacity building” ([86-91], [96-100]).


Practical implementation requires AI-ready data standards, interoperable APIs, sector-specific protocols, and a federated approach to avoid siloed “dark data.”


Vedashree stresses the shift from PDFs/CSVs to “AI-ready open data” with “metadata and its standards … critical for interoperability” and the need for APIs and real-time access ([158-166]). She also calls for a “supply-demand gap assessment” and sector-level data opening (e.g., payment systems directive, open banking) ([236-245]). Irina (Anthropic) describes the “MCP protocol” as a universal connector for contextual Indian data ([188-190]). Cyril links regulatory clarity to investor confidence, likening data governance to capital-market regulation that “creates trust” and enables “multibillion-dollar investments” ([268-278]).


Governance, oversight, and trust mechanisms (courts, ethics bodies, watchdogs) are essential to ensure that the regulatory framework is enforceable and does not become a “watch-the-watchers” blind spot.


An audience question asks “who would be watching the watchers?” and Cyril answers that “the answer lies in our constitution … the courts and the rule of law” and the need for an ethics code for AI ([384-387]). Tharoor later warns that “justice delayed … justice denied” undermines confidence in any regulatory regime ([390-395]).


Overall purpose / goal


The panel was convened to explore how India can design a robust, statutory open-data regulatory framework that supports the rapid growth of AI, safeguards privacy and sovereignty, and transforms public data into a catalyst for inclusive economic development.


Overall tone


The discussion begins with a light, imaginative role-play to frame the issue, then shifts to a serious, analytical tone as experts present evidence, critique existing policies, and propose concrete reforms. Throughout, the tone remains collaborative and forward-looking, though moments of urgency and tension appear when addressing geopolitical risks, legal gaps, and the need for strong enforcement. The conversation closes on a hopeful yet cautious note, emphasizing both opportunity and the imperative for disciplined governance.


Speakers

Asha Jadeja Motwani – Founder, Motwani Jadeja Foundation; established the Motwani Jadeja Institute for American Studies; venture-capitalist investing in tech and AI. [S1]


BK Patnaik – Audience member from Odisha (Orissa); asked a question to Dr. Patra about AI in agriculture. [S4]


Rama Vedashree – Senior official/panelist involved in India’s open-data initiatives; contributed to the design of the National Data Sharing and Accessibility Policy. [S6]


Dr. Shashi Tharoor – Member of Parliament (India), former diplomat and author; delivered the keynote address. [S9][S11]


Cyril Shroff – Managing/Founding Partner, Cyril Amarchand Mangaldas; convener of the panel and benefactor of the Cyril Shroff Centre for AI Law and Regulation. [S12]


Irina Ghose – Managing Director, Anthropic India. [S15]


Audience Member 1 – Audience participant who asked “who will be watching the watchers?” [S18]


Arun Prabhu – Partner and Co-Head, Digital and TMT Practice, Cyril Amarchand Mangaldas. [S22]


Dr. Sasmit Patra – Member of Parliament; member of the Parliamentary Oversight Committee on Communications and IT; speaker on evidence-based policymaking. [S11]


Audience Member 3 – Audience participant who raised a question about men’s health data and its regulatory handling. [S26]


C. Raj Kumar – Moderator of the panel; President and Editor-in-Chief of DevX (as per external source). [S29][S30]


Additional speakers (not listed in the provided names):


Jim Hacker – Fictional Prime Minister of the United Kingdom (used in the scenario).


Sir Humphrey Appleby – Fictional Cabinet Secretary of the United Kingdom (used in the scenario).


Bernard Woolley – Fictional Principal Private Secretary of the United Kingdom (used in the scenario).


Full session reportComprehensive analysis and detailed insights

The session opened with the moderator, C. Raj Kumar, staging a brief role-play that placed the fictional UK Prime Minister Jim Hacker and his senior civil servants in a mock cabinet meeting. He used the imagined exchange, in which the Prime Minister called the open-data session "the most important … at the entire Global AI Summit" and asked whether India should move from "voluntary open-data initiatives to a statutory regulatory framework that actually requires government bodies to share standardised aggregated data sets", to frame the debate as one between optional goodwill and legally backed obligation [7-13][28-30][11-13][50-53].


Shashi Tharoor, Member of Parliament and former Minister of State for External Affairs, delivered the keynote, positioning artificial intelligence as the operating system of modern society and arguing that open-data regulation is fundamentally a question of power, sovereignty and fairness [42-51]. He defined open data in its simplest form as "data that is made accessible for use, reuse and redistribution with minimal legal or technical barriers" [56-57] and stressed that, in the AI age, it also signals an intent about how knowledge is shared and how power is distributed [58-61]. Tharoor illustrated the transformative potential of open data with two concrete examples: the United States' release of meteorological data, which "laid the groundwork for entire private ecosystems in weather forecasting, logistics, insurance and risk assessment" [68-71], and the COVID-19 dashboards that "enabled faster responses, improved coordination across agencies, and supported more informed public debate" [73-74]. He also noted that India's own open-government platform has already been used to "track welfare coverage and expose leakages in implementation" [66-68].


Rama Vedashree, former senior civil servant and architect of the National Data Sharing and Accessibility Policy (NDSAP), traced the origins of India's open-data movement to the early 2010s, highlighting the NDSAP of 2012 and the launch of data.gov.in, which were initially focused on "opening up government data … for research and policy-making" rather than on innovation [147-155][156-158]. She warned that the legacy approach of publishing static CSV or PDF files is now obsolete; modern AI requires "AI-ready open data … always available, with metadata and standards … consumable via APIs" [158-166][162-166]. Vedashree called for a "supply-demand gap assessment" to map which datasets are needed by researchers, startups and sectoral regulators, and to ensure that data is released in AI-ready formats with interoperable metadata [236-242][236-245]. She cited the EU's Payment Services Directive and the UK's Open Banking initiative as precedents for sector-level data-access regimes [236-242]. Emphasising a federated, sector-specific approach, she warned that "institutional data … is getting locked and siloed" and that "dark data" must be opened in a secure, anonymised way [164-168][236-245]. Concluding, Vedashree advocated a federated open-data strategy, an ARAD framework, to coordinate initiatives across ministries [350-352].


Irina Ghose, Managing Director, Anthropic India, highlighted that India accounts for the highest usage of Anthropic's Claude, underscoring the country's appetite for generative-AI services [188-190]. She introduced the Model-Context Protocol (MCP), a "universal connector" created in 2024 and open-sourced to the Linux community, which provides contextual Indian-language, domain-specific data and enables "trust-first innovation" through transparent, API-first sharing [190-191]. These proposals aim to make data consumable not only by end-users but also directly by AI systems, thereby meeting the "AI-ready" requirement highlighted earlier.


Cyril Shroff, managing partner at Cyril Amarchand Mangaldas, argued that regulatory clarity is a prerequisite for innovation, likening data governance to capital-market regulation: "if you can just substitute the word capital market by the data and the digital world, you get the same answer", namely that trust arises from "regulatory clarity, enforcement, good accounting standards and uniform regulatory language" [268-278]. He maintained that such trust would attract "multibillion-dollar investments, data-centres, and a shift from a services-based to a product-based tech sector" [279-283].


Sasmit Patra, Member of Parliament, advocated a "soft-touch" regulatory model that classifies data into three tiers (public-good, national-security and commercially exploitable) and tags each set so that cross-border flows can be managed without eroding policy space [288-298][295-298]. He emphasized that sector-specific safeguards are essential, pointing to Germany's provision that allows patients to voluntarily share health data for research [358-361].


Arun Prabhu, partner and co-head of the Digital and TMT practice at Cyril Amarchand Mangaldas, highlighted the legal vacuum, noting the absence of "a clear identified anonymisation standard, clear identified public data interchange standards, and a recognised purpose for processing public data" [259-262]. He called for statutory clarity on purpose, standards and enforcement mechanisms.


Asha Jadeja Motwani, founder of the Motwani Jadeja Foundation and a venture investor, warned that India's reliance on the "American stack", from chips to APIs, creates a strategic vulnerability. She suggested that if India consciously chooses this stack, a "joint regulatory framework" with the United States is needed to ensure that data benefits flow back to India and that "our hands are tied just like their hands would be tied" [327-339].


During the audience Q&A, a participant asked “who will watch the watchers?” [344]. Shroff replied that India’s constitution, the courts and the rule of law provide the ultimate oversight, complemented by an emerging AI ethics code [384-387]; he acknowledged the judiciary’s backlog but stressed that “the courts … are the one answer in India” [385-387]. Tharoor added that India’s courts are burdened with roughly 50 million pending cases, limiting reliance on a rule-of-law narrative alone [389-395]. A second question raised the scarcity of gender-specific health data. Vedashree noted that personally identifiable health data will likely remain closed, but cited Germany’s health-data sharing provision as a model for voluntary, anonymised contributions [358-361]. Patra reinforced the need for progressive regulation and public awareness to enable such sharing [350-361]. A third question concerned farmers’ lack of equipment, electricity and internet; Tharoor highlighted this gap, warning that AI cannot reach those without basic infrastructure [389-393] and cautioning against a scenario where Indian data fuels proprietary AI that “the 10 000 people here can’t afford” [399-415].


The discussion concluded with Tharoor reminding the audience that the promise of open data must be anchored in reality, and that “the purpose of health data aggregation ought to be to solve similar problems for other people.” Shroff reiterated that a statutory, purpose-driven, AI-ready open-data regime – with strong privacy safeguards, capacity-building measures, a federated architecture and geopolitical foresight – is essential for India to transform data into a catalyst for transparent governance, inclusive innovation and a sovereign digital future [88-100][119-123][259-262][327-339]. Raj Kumar closed by echoing the opening call for a binding legal framework that treats data as public infrastructure, delivered in interoperable, AI-ready formats, and overseen by India’s courts and an ethics regime.


In sum, the panel converged on several key takeaways: a binding statutory regulatory framework is required to overcome the unevenness of voluntary schemes; open data must be treated as public infrastructure and delivered in AI-ready, interoperable formats; robust anonymisation, informed consent and grievance mechanisms are non-negotiable; domestic digital capacity and sector-specific strategies (including the ARAD federated model) are vital to prevent data capitulation; reliable public data can boost investor confidence and economic growth; geopolitical dependence on foreign technology stacks must be mitigated through joint regulatory arrangements; and ultimate oversight will rest on India’s courts, complemented by an AI ethics regime. These conclusions chart a roadmap for India to shape a fairer digital order while safeguarding its sovereignty and development goals.


Session transcriptComplete transcript of the session
C. Raj Kumar

and Mr. Arun Prabhu, Partner and Co-Head, Digital and TMT Practice, Cyril Amarchand Mangaldas. We also have the distinguished presence of Ms. Asha Jadeja Motwani, Founder of the Motwani Jadeja Foundation. So, before we begin, I intend to invite Dr. Shashi Tharoor to deliver a keynote address, but given the extraordinary significance of the discussion we are having today, I quickly created a scenario where this Global AI Summit is expected to be attended by many individuals, and many have attended. I have created a scenario where I transport you back to the Prime Minister’s office in the United Kingdom. Imagine Jim Hacker, PM of the UK, Sir Humphrey Appleby, the Cabinet Secretary of the UK, as well as Bernard Woolley, the Principal Private Secretary, are here to attend this.

So, I am going to create a scenario for three minutes. Bear with me. Hacker, the Prime Minister, says, Humphrey, I’ve decided this is the most important session at the entire Global AI Summit. And Humphrey says, Prime Minister, with respect, there are panels on frontier AI, sovereign computing and semiconductor strategy. Hacker, exactly. All terribly glamorous, but this one is about open data, the plumbing. Without it, the rest is just PowerPoint. Bernard, yes, Prime Minister, they are discussing whether India should move from voluntary open data initiatives to a statutory regulatory framework that actually requires government bodies to share standardized aggregated data sets. Sir Humphrey says, requires? Hacker, yes, Humphrey, safeguards, incentives, accountability, coordination between ministries, even defined economic models for access, free, paid and restricted tiers.

Sir Humphrey replies, Prime Minister, the beauty of open data policies is that they are aspirational. Once you introduce statutory mandates, you risk consistency. Hacker, That’s the point. They’re arguing that without a legal backbone, participation is uneven. Some departments share, others don’t. No uniform standards, no enforcement. Investors get nervous. All developers complain about unreliable data sets. Bernard, and apparently, high -quality public data improves evidence -based policymaking, targeted welfare delivery, and even capital formation. Humphrey replies, yes, Bernard, but also improves scrutiny. Hacker, Humphrey, they’re not just talking about dumping spreadsheets online. They’re debating architecture, secure environments, anonymization protocols, synthetic data, interoperable standards. And Bernard replies, and ensuring privacy and copyright protections don’t clash with open data objectives.

Sir Humphrey says, Prime Minister, when privacy, innovation, geopolitics, and economic growth are all mentioned in the same regulatory framework, one usually convenes a task force to study it indefinitely. Hacker replies, but that’s precisely what they are avoiding. They are asking the real question, should there be regulatory teeth so that government data sharing isn’t optional goodwill but institutional obligation? Hacker replies, they are also discussing geopolitical standards and safeguards, access restrictions. In other words, Minister, structured openness rather than chaotic transparency. And Humphrey replies, structured openness is merely closedness with better branding. Hacker, Humphrey, if AI is the future, then data is the raw material. And if government holds the richest data sets, then refusing to regulate sharing properly is like building a digital economy and locking the warehouse.

And Hacker replies, Humphrey, that’s why this is the most important panel. Everyone is discussing what AI can do. They are discussing what governments can do. I want to stop here and invite my dear friend and mentor, Dr. Shashi Tharoor, to deliver the keynote address. Thank you.

Shashi Tharoor

Thank you. That was delightful, Raj. I was terrified for a minute that you were going to get me to play Sir Humphrey or something. But this is a pleasure to join you all this evening at the India AI Impact Summit 2026 and to share a few reflections on the subject that Raj has so cleverly animated for all of you, exploring a regulatory framework for open data. Artificial intelligence is no longer a distant frontier of innovation; it is rapidly becoming the operating system of our modern society. What was once theoretical is now embedded in our markets, our governance systems, and increasingly our personal choices. A nation’s digital footprint, a sort of triad of three Cs, commerce, communication, and cognition, is now its primary source of wealth.

We’re often told, almost as an article of faith, that data is the new oil. Yet, as Chris Miller reminds us in his compelling account in Chip War, the real constraint of the AI age is not the volume of data, but the power to process it. That single sentence punctures a convenient myth. It is, excuse me, it tells us that abundance alone does not confer agency, and that openness without capacity can entrench inequality as easily as it can enable progress. The decisive question, therefore, is not how much data exists, but who controls its use, who extracts its value, and who is left behind. Seen in this light, the regulation of open data is not a technical footnote.

It is a question of power, shaping sovereignty and surveillance, innovation and inclusion, freedom and fairness in our digital age. It’s a privilege to share this platform with such a distinguished and accomplished group of colleagues under the stewardship of my good friend Raj Kumar, whose intellectual leadership has shaped conversations on law and global governance. I’m honoured, of course, to engage alongside Cyril Shroff, Asha Jadeja Motwani, Arun Prabhu, Rama Vedashree, Irina Ghose, and my parliamentary colleague, though in a different house, Sasmit Patra, individuals whose expertise across law, technology, policy, industry, and democratic institutions has profoundly shaped the very debates we’re having today. To speak in the company of such authority is both an honour and a responsibility, and so how, let me ask, might we craft a regulatory framework for open data that is equal to the ambitions we all have and the anxieties many of us are expressing about the AI age?

So to begin with, we must be clear about what we mean by open data. At its most basic, it refers to data that is made accessible for use, reuse and redistribution with minimal legal or technical barriers. Yet in the context of the AI age, open data is far more than a question of access. It’s a statement of intent about how knowledge is shared, how power is distributed and how societies choose to govern the informational foundations of innovation. When designed thoughtfully, open data becomes more than a technical tool, it becomes public infrastructure. It strengthens transparency in government, levels the playing field in markets and creates genuine avenues for citizen participation. But when released without clarity, safeguards or purpose, as Bernard pointed out in Raj’s presentation, it risks becoming little more than symbolic.

A sort of symbolic nod to open data. It can turn into an unguarded channel through which value, agency and even sovereign control quietly drift elsewhere. We all know that open data can be genuinely transformative. We’ve seen how making government data publicly accessible can strengthen democratic accountability, whether it’s citizens tracking public spending, researchers analysing welfare delivery or civil society organisations flagging gaps in implementation. India’s own open government data platform has been used to track welfare coverage and expose leakages in implementation that might otherwise have remained invisible. But the value of open data extends beyond transparency alone. When the United States chose to release meteorological data freely, they did more than increase transparency.

They laid the groundwork for entire private ecosystems in weather forecasting, logistics, insurance and risk assessment. What began as public infrastructure became the foundation for commercial and technological growth. Its importance becomes even clearer in times of crisis. During the COVID pandemic, openly shared health data and public dashboards enabled faster responses, improved coordination across agencies, and supported more informed public debate. So if we take these examples, the lesson is consistent. When data is treated as shared infrastructure rather than as a guarded asset, it lowers barriers, improves decision-making, and enables societies, particularly in the developing world, to turn information into durable capacity. And yet, my dear friends, openness alone is not a panacea.

Open data, poorly structured, can generate new vulnerabilities even as it promises transparency. Without safeguards, openness may devolve into tokenism, data sets released without context, quality control or enforceable standards, or worse, into asymmetrical extraction. There is a trilemma of digital governance: digital ascendancy, digital capitulation and digital sovereignty. Today, most of the world’s large cloud servers and advanced artificial intelligence systems are owned and operated by a small number of technology companies based primarily in the United States and parts of Europe. This is digital ascendancy. It means that data generated in developing countries, whether it is mobility data from ride-sharing apps, digital payment transactions, agricultural statistics or health records, is often stored, processed and analysed on infrastructure located abroad. When that data is then used to train AI systems, improve algorithms or develop commercial digital services, the profits, patents and technological advantages tend to accumulate where the platforms are headquartered, not where the data is originally generated. Put simply, the location where data is produced is not necessarily the location where value is created. This is where the question of data sovereignty arises. If countries do not invest in their own digital infrastructure and regulatory capacity, the benefits of open data can accrue disproportionately outside their jurisdiction. One-sided concessions on digital taxation and digital trade are a form of data capitulation. Indonesia and Malaysia have succumbed in their trade agreements with the US. We must not. This dynamic is increasingly playing out in real policy debate. It is visible in digital trade negotiations, where restrictions on data localization or limits on source code disclosure can narrow the policy space of developing economies seeking to nurture domestic digital industries.

It is also evident in the market concentration of hyperscale cloud providers, whose global dominance shapes where data is stored, processed, and ultimately valorized. The issue is not cross-border data flows per se. Digital cooperation depends on them. The concern is whether openness is reciprocal and capacity enhancing, or whether it systematically positions some countries as suppliers of raw data while others capture downstream gains in artificial intelligence, advanced analytics, and platform governance. An instructive example: when the U.S. sought to compel the divestiture of TikTok, its demands included mandatory data localization, majority U.S. ownership in the restructured entity, and U.S.

control over source code. This is data sovereignty on steroids, and it is exactly what the rest of us can only aspire to. The answer, therefore, is not to retreat from openness, but to shape it deliberately. If openness without strategy creates imbalance, then openness with guardrails can create resilience. A credible regulatory framework for open data must begin with clarity of purpose. Why is this data being released? For whom, and under what safeguards? It must ensure strong anonymization and privacy protections so that transparency does not come at the cost of individual rights. Closely linked to this is the principle of consent and control. Individuals and communities should have meaningful agency over how data derived from them is used, shared, and repurposed,

particularly when data sets are combined, commercialized, or deployed in AI systems. Consent must be informed, revocable where possible, and supported by accessible grievance mechanisms. The framework must also build accountability into the system: clear standards for access, independent oversight, anonymization, and remedies when misuse occurs. And critically, openness must be tied to domestic capacity building. Data sovereignty has little meaning without adequate capacity. Public data should not simply circulate globally; it should strengthen local research institutions, startups, digital infrastructure, and technological expertise. Domestic digital law should prevail over foreign commitments. At the same time, none of this implies that countries should isolate themselves digitally. Cross-border data flows are essential to research collaboration, to trade, to financial systems, and to technological innovation.

Digital ecosystems simply do not function in silos. However, enabling data to move across borders should not mean that countries give up the ability to regulate how that data serves their own development priorities. Interoperability should facilitate cooperation, not erode policy space. This balance between openness and sovereignty is already reflected in recent multilateral commitments. The G20 New Delhi Leaders’ Declaration in 2023 placed digital public infrastructure at the centre of inclusive growth and emphasised data for development, linking data governance with trust, security and domestic capacity building. The message was clear: data must support development, not undermine regulatory accountability. Similarly, the Global Digital Compact adopted by the United Nations calls for safe, transparent and trustworthy data governance, stronger digital capacity in developing countries,

and international cooperation that respects national regulatory frameworks. Together, these signals suggest that the emerging consensus is not about unrestricted flows or digital isolation, but about structured openness, where innovation and cooperation coexist with sovereignty and institutional strength. If we widen the lens, what emerges is not a contest between openness and sovereignty, but a conversation about how different regions are navigating that balance. The European Union has demonstrated how strong regulatory architecture, through instruments such as data protection and digital market rules, can shape global standards. India, by contrast, has shown how digital public infrastructure can scale inclusion at population level. India is putting innovation ahead of regulation. These are not competing models. They are complementary experiments in digital global governance.

And increasingly, the global south is not merely observing this evolution; it is participating in it. India’s experience with IndiaStack illustrates what this participation can look like. By building interoperable layers, digital identity through Aadhaar, real-time payments through UPI, document exchange through DigiLocker, India has created a public digital backbone that supports innovation while remaining accessible and adaptable. Crucially, this architecture has been offered as a template for other developing countries seeking scalable and affordable digital solutions. In doing so, India has reframed digital infrastructure not as proprietary leverage but as a developmental public good. Of course, much remains to be done. Questions of data protection enforcement, AI governance, cybersecurity resilience and equitable access require sustained attention,

but the direction is clear: India is not approaching the digital future as a passive market; it is shaping it as an architect. As conversations advance from the G20 to the Global Digital Compact, and now through initiatives such as this India AI Impact Summit, the emphasis is increasingly on responsible innovation, capacity building and inclusive growth. Our trade agreements must not promote digital dependency or virtual vassalage. We must emerge as a digital sovereign, empowered to protect our own giants and capture the wealth generated by our own data. Friends, the task before us is not to choose between openness and control, but to design systems that honour both. If we succeed, open data will not be a source of vulnerability but of empowerment. And in that journey, India, alongside partners such as the EU and our fellow countries of the global south, has the opportunity not merely to catch up but to help define the rules of a fairer digital order, rather than subject or submit ourselves to subaltern status under a new extractive digital order.

Okay, Raj?

C. Raj Kumar

Thank you, Shashi, for setting the tone for this. So, as you can see, Prime Minister Hacker and Humphrey and others are sitting here, and after hearing this speech, Humphrey remarks: Prime Minister, if we start to believe what Shashi Tharoor is saying, we may end up in a situation where governments begin doing what they must rather than what they prefer. We may be entering a new administrative era. And the Prime Minister, Hacker, replies: good. And Humphrey replies: terrifying. And Bernard says: Prime Minister, shall we remain in this panel? They are about to discuss statutory mandates. And Humphrey replies: I do hope it’s only exploratory. With that word, may I now invite our distinguished panelist, Ms.

Rama Vedashree. May I request all our panelists to keep it to three to four minutes so that we can hopefully have another round. So, Ms. Vedashree, you’ve had a long and distinguished career pretty much designing these things and providing leadership. So my question to you is: when and how did the idea behind the National Data Sharing and Accessibility Policy and the open government data platform essentially germinate? Take us through the journey and also the challenges that you faced along the way.

Rama Vedashree

Sorry, I’m here.

C. Raj Kumar

I’m sorry I didn’t notice that. Take us through the journey and help us understand how the concept moved from its formative phase to a reality. Thank you.

Rama Vedashree

So actually this open data movement was a global movement. In India, it was started by former cabinet colleagues of Mr. Tharoor, Mr. Kapil Sibal and Mr. Sachin Pilot, when they were in the ministry. It started then, and then came the National Data Sharing and Accessibility Policy, I think around 2012. And industry also contributed to that entire draft and that policy. At that point of time, I think the entire focus was on opening up government data, and maybe some development data. So it was mainly government data. And then the data.gov.in platform came. What we need to take stock of is that the entire open data movement, and our own NDSAP policy and the data.gov.in platform, was built to open up government data, probably for research and other policymaking.

Innovation, I mean opening up this for innovation and startups, was not really a primary objective, because it was the pre-startups era and the pre-AI era. And that’s where I think now we need to really revisit that and make sure that we’re not just locked into the old paradigm of open data. Right now you need open data, but open data which is also AI-ready, which is extremely important, because when you look at an LLM or any other small language models, there are end users, there are professionals, there are researchers, everybody using them, prompting them, and they’re expecting the data. In the past, when the open data movement started, I think we were happy if government opened up by giving us a PDF or CSV file, and we would figure out how to download it, put it in a spreadsheet and do our analysis.

Whereas now, the data needs to be truly open: always available, and, most importantly, I think metadata and its standards are extremely critical for the interoperability of the data. And we need to revisit how different segments of users of this open data are going to consume that data. Nobody wants to download and do something offline anymore, right? They want to be able to consume the data through APIs and through apps, and then of course through entire AI systems. We need to make open data available where not only end users like you and me can consume it, but both apps and AI systems can consume it. I think that is where we have a challenge, and having spent 35-plus years in the industry, I beg to submit that just opening up government data is not enough.

There is a lot of institutional data which is getting locked and siloed. I would like to call it dark data, because nobody is using it, even in commercial enterprises and with regulators and nodal organizations like CERT. So when you look at cybersecurity startups, they really don’t care about what is there on data.gov.in. They need a lot of data which is there with the nodal institutions of government. Similarly, with regulators, fintechs want data that is residing with NPCI. So we need to look at how we open up this data.

C. Raj Kumar

Thank you so much, Ms. Vedashree. That was very concise and even compelling, especially coming from a regulatory standpoint. May I invite Ms. Irina Ghose, the Managing Director of Anthropic India. So Ms. Ghose, my question to you is: as Anthropic deepens its collaboration and presence across India, could open data sharing frameworks help drive trust-first innovation and development in the Indian AI space? Is this relevant at all for AI developers such as Anthropic in making AI models more secure, trustworthy, and well-suited for complex, dynamic, and rapidly evolving markets such as India? Thank you so much.

Irina Ghose

It’s indeed a pleasure and a complete honor. And I really love the analogy and the follow-up thereafter as well. Let me first begin by saying that I think all of us totally agree that AI for India is a generational opportunity, in the context of the data, the demographics and the culture. Having said that, the question is not whether this is the AI moment for India. The question is: do we trust it, and do we want to make it the AI moment for India? And trust, for all of us, needs to be a verifiable outcome. Do we trust the data that we are putting in, every click, every transaction, every decision which is triggered by the AI?

Is there an invisible filter, or are we trusting it? So, two parts to that in my mind. One is the data that we are collecting: how are we using it and making it available for many more experimentations and innovations? If the model is only being built on Western data, or for financial institutions serving a different segment or sector, it won’t be as useful for all of us. So a few things we need to do: make it contextual to the local language and the domain in the local language, legal, agriculture, for the languages which are there in India. That’s the first thing. Now, how do we ensure that it goes across at scale? That’s the second. There are three things that we are doing. First of all, we are doing an economic impact index survey, by which we are ensuring that we are really making data available on the way people are using it in India. And a big round of applause to everybody out here, because the highest usage of Claude, which is Anthropic’s tool, is from India. So we have a great way of knowing what people are doing, and we share it completely, contextually, as to what people are using it for. That’s the first. The second: people will want to use the data and the context collectively. Build it once; don’t rewrite code every time. The analogy I would use is that in the mobile phone world, you did not want a different charger for every mobile, right? The universal connector came along and solved all those problems. So look at a farmer: when he wants to use things, there are three or four kinds of data, the market index, the soil data, the irrigation data. If you try to pull in data afresh every time and make it work, it’s going to fail. So a model context protocol, MCP as we call it, was created by Anthropic in 2024, and we put it across to the Linux community; anybody and everybody can use it, so that once you create an AI layer on top of that, people can pull that data. Why is it contextual for India?

There is a lot of data lying across agriculture, health and education, and, as Rama called out, locked in institutions. We are working in collaboration with all the players, Google, Anthropic, Microsoft, everybody else put together. And when the Honourable Prime Minister called out the manifesto, we committed to ensuring that we make data transparently available. We are also committing that we will build for use cases in the sectors which mean the most to India, so that we emerge and make it the AI moment for India.
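[Editor’s illustration] Ms. Ghose’s “universal connector” point can be made concrete. MCP is, at its core, a JSON-RPC-based convention in which a server advertises named tools and a client (or AI system) calls them uniformly, so each new data source does not need bespoke integration code. The sketch below loosely mimics MCP’s `tools/list` and `tools/call` methods in plain Python with hypothetical dataset names (soil, market, irrigation); it does not use the real MCP SDK and is only a schematic of the pattern.

```python
import json

# Hypothetical data sources a farmer-facing assistant might need.
# In a real MCP server these would be live lookups; here they are stubs.
DATASETS = {
    "soil_data": lambda district: {"district": district, "ph": 6.8},
    "market_index": lambda district: {"district": district, "price_index": 104.2},
    "irrigation_data": lambda district: {"district": district, "canal_coverage_pct": 57},
}

def handle(request_json: str) -> str:
    """Dispatch a JSON-RPC-style request, loosely mimicking MCP's
    tools/list and tools/call methods."""
    req = json.loads(request_json)
    if req["method"] == "tools/list":
        result = {"tools": sorted(DATASETS)}          # advertise available tools
    elif req["method"] == "tools/call":
        name = req["params"]["name"]
        args = req["params"].get("arguments", {})
        result = DATASETS[name](**args)               # one uniform call path
    else:
        return json.dumps({"id": req.get("id"), "error": "unknown method"})
    return json.dumps({"id": req.get("id"), "result": result})

# A client that speaks this one convention can consume every dataset
# behind it -- the "universal charger" idea from the panel.
listing = json.loads(handle(json.dumps({"id": 1, "method": "tools/list"})))
soil = json.loads(handle(json.dumps(
    {"id": 2, "method": "tools/call",
     "params": {"name": "soil_data", "arguments": {"district": "Cuttack"}}})))
```

The design point is that adding a fourth dataset means adding one entry to the registry, not writing a new integration for every consuming app or model.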

C. Raj Kumar

Thank you, Ms. Ghose, for really giving that perspective. Now may I invite Mr. Cyril Shroff, who is of course the convener of this panel, but also the founder and managing partner of Cyril Amarchand Mangaldas and the benefactor of the Cyril Shroff Centre for AI, Law and Regulation. Mr. Shroff, in your view, might a clearer regulatory framework be necessary to ensure more consistent, effective and systemic data sharing by government bodies? The clarity that we need from you is on the role that a regulatory framework can play in institutionalizing incentives and accountability and putting in place initiatives,

rather than leaving it to the courts. I say this because, most of the time, lawyers come to the party very late. The technology is so fast, things get done, and when things hit the fan, to put it bluntly, lawyers are asked to clean it up. Should we do it differently?

Cyril Shroff

innovation and regulation, and the regulation here is intended actually to create the foundation stone for innovation. If data was systematically available in a usable, AI-ready format, that would actually spark a lot of innovation and create the foundation for it. So I think that the short answer, as I said, is yes. And just to build on Dr. Tharoor’s point on data sovereignty, I think the Prime Minister said it well when he actually said that, and I think as India we need to assert that right. All the data is largely in the global south, and all the companies, the private sector and the usage are largely in the global north.

I think we need to assert ourselves. I think that’s exactly what Dr. Tharoor said, and I’m a great fan of that. And it partly explains why, at a personal philanthropy level, I created this centre: because lawyers come late to the party, but some lawyers don’t. So I think this is what I expect from your centre, Raj. So I’ll stop there. I think we have a lot to come.

C. Raj Kumar

Lawyers have another quality: put the blame on somebody else when things are not happening. That is known as good management. Thank you so much, Cyril, because I think it’s important for us to recognize that while we are indeed attempting to frame regulation, we also should not stifle growth and innovation, because that’s the biggest death knell we can sound for a lot of the entrepreneurship that’s emerging. May I now invite Dr. Sasmit Patra. Sasmit, you are a distinguished member of parliament, and of course you have straddled the worlds of policymaking and academia. How can greater availability of reliable public data lead to stronger evidence-based policymaking and more efficient delivery of public goods?

In fact, the real question is the criticality of data in identifying relevant areas of policy intervention by the state, for designing public policy instruments and frameworks so that targeting reaches the relevant stakeholders. How do we do that?

Dr. Sasmit Patra

Thank you, Raj. It’s a very important question, and I’ll take it in two parts. The first part is whether data is important to policymaking. Yes, it’s a no-brainer. Second, in a federal structure, the problem is that data is in silos. Let’s say I come from the state of Odisha. The data of our farmers in our local Kalia Yojana would be in a different format, and probably kept differently, than the PM Kisan data held by the federal government. Secondly, how can this be useful? I’ll come to the second part. Take crop loss: the Pradhan Mantri Fasal Bima Yojana is a provision for reimbursement or compensation for crop losses for farmers. In this scenario, if the data is readily available with the government, the government can predict, over the next one to two years, which are the districts, which are the taluks, which are the blocks, which are the panchayats where crop losses have been happening over a period of time. So, predictively, AI can bring about solutions to, A,

try to find out the reasons for crop loss; B, try to mitigate the losses; C, try to strengthen the farmers for crop diversification; and D, try to generate a new form of mitigation plan that can be implemented by the government in those areas. In order to do that, you need data. Without that data, it is not possible. I’ll come to the last part, where the government will actually have a problem. It’s a political question, and I’ll play the politician here. The reason is this: when the government says it is going to share data, it is the data of 1.4 billion people, right? How many of you sitting in this room are willing to share your data through the government, even anonymized? That’s the question I’m trying to put to you. Let’s say tomorrow the government comes up with a regulation and says: I want to share that data, trusted and verifiable data, my citizens’ data. Not the data of a nameless, faceless farmer, but the data of the movers and shakers of Delhi sitting here at Bharat Mandapam, now released for the training of LLMs and micro-LLMs. Are you happy sharing that data? That’s where the catch is. That’s the political question. The regulatory question is: yes, there has to be the data. The policy question is: the data is needed for better policies. But as citizenry, how many of you sitting in this room are comfortable sharing the UPI transactions that you do?

That question, even with anonymized data, will always remain. So the answer starts with you and ends with you as a citizen.

C. Raj Kumar

Thank you so much, Sasmit, for that very important question. I think it’s important to recognize that at the heart of it is the extent to which citizens are prepared to trust the government; the trust factor becomes critical here. Let me quickly move to Ms. Vedashree. We’re doing very well on time, so thank you to all the panelists for keeping responses short. So Ms. Vedashree, why in your view have proposals such as the India Data Accessibility and Use Policy and the National Data Governance Framework Policy, well-intended policies, not really moved forward? Why have these government interventions remained only at the policy level, lacking regulatory enforcement?

Rama Vedashree

So the first thing, I think, is that we need a supply-demand gap assessment, because maybe government is opening up or throwing up some data on the data.gov.in platform, but who are the users consuming it? And the ministries, who are anyway overloaded with so much work, if they need to manage this and regularly submit all the data sets in an open format, they need to see what will come out of it. Which means we need to tie it up with researchers, we need to tie it up with inventors and innovators. I think that did not happen so far. Whereas now, if you really look at AI-ready data sets: she talked about open standards for interoperability, she talked about the MCP protocol.

I think we now need to look at what data sets are needed for research, which could be academia and research students, and for industry, which could be all the startups. Unless we map that and revisit the necessary policy, government data will not be useful. Development data, which even the World Bank throws up in its open data sets, is also equally important for policymaking. But if you’re looking at opening up data repositories, dark data as I call it, for innovation purposes, I think we need to look at how we open up commercial data in a secure, anonymized way. There have been some steps, sir. For example, the Payment Services Directive in the UK.

Now the EU, which has always been at the extreme of protecting data, is talking of FiDA, financial data access, where they’re saying: at a sectoral level, how do we open up data access? The Payment Services Directive and the open banking initiative were that. Similarly for healthcare data, which hopefully the Ayushman Bharat mission will open up. So I think we need to look at the supply-demand gap, what data will be consumed by which segment of users, and open up those data sets. Otherwise, I don’t think we will move anywhere.

C. Raj Kumar

Thank you so much, Ms. Vedashree. I haven’t forgotten you, Arun. You’re, of course, our own. Arun is a partner at Cyril Amarchand Mangaldas. If India were to move towards a more structured legal framework for open data sharing, which core principles and safeguards should shape its design? Is it all about design thinking, so that government bodies are required to share aggregated data sets on a free, paid or restricted basis, with voluntary private participation on the supply side and, of course, safeguards to prevent misuse?

Arun Prabhu

Thanks, Raj. What I lack in the eminence or erudition of my fellow panelists, I will try to compensate for, very inadequately, with a certain radicalism. Not the radicalism of rhetoric, but the radicalism of making bold suggestions as to the minimum viable proposition of sustainable open data, and by positing that the lack of open data sharing that several of the key panellists, including the keynote, have called out has arisen from the lack of a durable legal architecture. Today in India, despite having a Digital Personal Data Protection Act, episodic intermediary regulation, and several policy and practical initiatives on the sharing of non-personal data, we do not, as the world’s largest democracy, have a clearly identified anonymisation standard or clearly identified public data interchange standards.

We do not have a clearly recognised purpose for the processing of open public data sets for public good and public improvement. This means that any initiative, particularly large, complex, multi-decadal initiatives like large language models and their deployment, these multi-billion-dollar investments, is open to the travails of judicial storms, executive weather patterns and, perhaps most importantly, legislative climate change. A government official who creates an open data repository has to risk that in five years his action may not only be frowned upon but be downright illegal. A founder betting his life on creating the next generation of open data architecture and applications has to risk that at some point his business becomes fundamentally unviable. I submit to you that absent these four key elements, working coherently not only with existing architecture but also with the constitutional principles laid out in the Puttaswamy judgment, which continue to pervade our democracy, until that architecture is enacted in legislative form, in a way that does not rub up against the various pieces of isolated sectoral regulation we have across individual regulators, we will not have a sustainable open data ecosystem. Thank you.

C. Raj Kumar

Thank you so much, Arun. That was fantastic; spoken like a true lawyer. Cyril, quickly, we have a few minutes left, and we have concluding remarks as well. From your vantage point, how can greater availability of reliable public data influence investor confidence, efficiency of markets, and long-term economic growth? In many ways, this is also a moment for India to showcase its potential for attracting investors who believe both in the government and in the prospect that their investments will deliver the right results.

Cyril Shroff

I’m going to answer your question with an analogy. One of the hats that I wear is also that of a capital markets lawyer, and I’ve seen the growth of India’s capital markets from a very restricted, fundamental level to what they are today, one of the most vibrant capital markets in the world. Last year, India had 25% of all global IPOs, even more than the U.S. And why did that happen? It happened for a variety of commercial reasons, but also because we have a very vibrant capital market regulatory system in place. It has taken 25, 30 years to get us to this point. But a lot of it is about having regulatory clarity.

It is about having the right enforcement. It is about having good accounting standards. It is about uniformity in the regulatory language that is used, at least the same vocabulary. If you just substitute the words capital market with data and the digital world, I think you get to the same answer. So I think the answer lies in this: if you want to create trust in the community, if you want the multibillion-dollar investments, if you want data centers to be set up here, if you want us as a country to move from a services-based tech sector to a product-based tech sector across the different parts of the digital world, I think you first have to create trust, and trust cannot happen without transparent information and a reliable legal and policy system.

Now, one of the things that we periodically get hit on the head with is that our dispute resolution system is too slow, that it takes 30 years to enforce a contract, blah, blah, blah, something for which we take disproportionate stick. But I think a lot of it ultimately comes down to whether you can trust your legal system. And I think the answer is that if we are able to create the right regulatory, policy and enforcement framework for this, which in a way answers your question, we would have solved it. It’s not going to happen otherwise. There’s no point having a law which you can’t enforce.

C. Raj Kumar

All right. Thank you so much, Cyril. I have enough sign language indications to say that we have another 10, 12 minutes, so I am proceeding forward. Sasmit, quickly to you: are there any geopolitical concerns that need to be addressed if open data sharing practices by the government are to be scaled up in India? Should that be something we need to be concerned about, especially because you’re sitting in parliament and there are, of course, opposition parties really coming forward to question and challenge the government on this matter as well.

Dr. Sasmit Patra

You know, in fact, when the US-India trade deal recently happened, you had a lot of energy being seen in parliament and outside. So sharing of data, and the method by which we share data, is of course a geopolitical concern for the country. So maybe we can look at data, as Madam just said, in three categories: one is data that is for the public good and for humanity; the second is data that is restricted, probably for national security; and the third is data that can be monetized and is commercially useful. So instead of putting the entire data set within one silo, we can look at the usage and tag the data accordingly, so that the multi-billion-dollar innovator benefits, the regulators benefit, the citizenry benefits, and finally the policies for the farmers, the Anganwadi workers and the ASHA workers also get done.

Last point, and I just want to put that on record because I’m on the Parliamentary Oversight Committee on Communications and IT, and Dr. Tharoor was the earlier chairman and is my distinguished colleague there. One of the critical areas that we are at least discussing and debating is that we are not looking at a very hard, strong EU-style AI Act. I don’t think that’s happening anytime soon. We’ll have a regulatory framework, and the key word is soft-touch regulation. Where that takes us remains to be seen.

C. Raj Kumar

Thank you so much, Sasmit. We’re very fortunate to have both Shashi Tharoor, former chair of the same Parliamentary Committee, and Sasmit Patra, now a member. May I quickly invite Ms. Vedashree to give a one-liner concluding response, especially as you look, from this vantage point, at how the future is going to evolve, particularly in light of India wanting to play a global thought leadership role. I think this summit is demonstrating that, our aspirations of positioning ourselves.

Rama Vedashree

as, you know, the AI leader of the world. We are working towards that. So, linking it to the topic, I think we need a very concerted data strategy at the government level. There were some efforts when the personal data protection bill was being debated; there was also a parallel one around a non-personal data framework. So I think we need a national-level data strategy, because we need to look at it from the current moment to the next five years. How do you open up? Sir talked about how data needs to be in different segments. I also believe that we cannot have one centralized open data repository. Data needs to be federated.

We also need to think through, along with the sectoral regulators, what the sectoral data-opening policies will be, because that’s where a lot of the data that can be monetized and innovation can happen. So we need to look at that at a sectoral level and at a government level, and at how we create this federated open data strategy which is AI-ready.

C. Raj Kumar

Thank you so much. Your one-liner, Cyril: what should India be doing as we look at the future?

Cyril Shroff

Not copying the West.

C. Raj Kumar

Good one, good one. Alright. May I invite Ms. Irina Ghose to respond, especially from the standpoint of Anthropic, but also of the private sector, which is expecting a huge presence in India.

Irina Ghose

Yeah, I think the last mile in making AI real is the diffusion which has to happen between the frontier firm that is creating the model and the person at the last mile who needs it, and that’s the thread of trust. Now, the thread of trust needs to be woven with contextual data in the context of India, ensuring that we are making it both open and accessible, and that everybody is contributing to that grid.

C. Raj Kumar

Thank you so much. Arun, over to you. Your one-liner.

Arun Prabhu

The absence of a legal framework goes from being an inconvenience to an impediment in the development of a sustainable data economy. We are at the point where India’s existing regulatory framework is making that transition. Thank you so much.

C. Raj Kumar

We have now come to almost the end of this panel, but we have a very distinguished panelist, Ms. Asha Jadeja Motwani. She has been silently and quietly listening to everybody, but she is at the heart of India-US relations, and is also a venture capitalist who has been investing in tech companies, innovation and AI in India and in the United States. She has been working hard to build that relationship, but has also been a benefactor: just as Cyril established the Center for AI Law and Regulation, Ms. Motwani established an endowment at our university, where we have established India’s first institute for American studies, the Motwani Jadeja Institute for American Studies. So may I invite you to share some reflections, having been part of the Global AI Summit and of course this particular panel. Over to you, Ms. Jadeja.

Asha Jadeja Motwani

Yeah, thank you, Raj, for inviting me. The one thing that I want to stress heavily is that, look, we are built on an American stack. And one of the things I heard at the AI summit was this question of: what if America at some point becomes more of a hostile entity and pulls its APIs? Will we be stuck? Will we be in a situation where we don’t know how to handle it? I think that question is something that we must think about, and we must know how to deal with it if it happens. But I don’t think it’s likely to happen.

We will actually have to make a decision and say that we have consciously chosen to be on the American stack, from the chip level all the way to the top. And if we consciously make that decision, then at a policy level, and probably even at the legal level, what you will need to figure out is this: if we have decided to work with the Americans on this and put our eggs in that basket, then we should have a joint regulatory framework so that we are never in conflict with them. That’s number one. Number two, we must also make sure that we don’t get into a situation where we are holding back data, because remember, the AI revolution is all about training the new models, training these new entities that are going to be a doctor in our pocket. For training those things, we need to make sure that our data, for example Indian health data, is open and accessible to those in the West who are developing these programs, these models. So it’s critical to know that it’s a fine balance. This is not like the internet business. With the internet, we had to worry about who was going to do what with that data.

Are they going to pump ads at us? This time, it’s much more about what these things are going to give back to us once they have that data. So it’s a tricky balance. And I think we will need to make a decision: do we trust the Americans, and do we trust the American stack? And if so, how do we proactively work with them so that their hands are also tied, just like our hands would be tied?

C. Raj Kumar

Thank you so much, Asha ji. In fact, it is also important for us to recognize that the bedrock of trust is based upon questions such as: should India be working with democracies? Should India be working with countries with more shared values, societies which largely recognize the importance of the rule of law and democratic institutions? And that’s where India-U.S. relations lie. We have of course had a wonderful panel discussion, but as a professor I will not let our panelists leave without an audience question. We have a few minutes left, so let’s have the mic to the lady here in the second row. Keep it very short; I’m going to collect three questions and have our speakers respond.

Audience Member 1

Thank you for giving me the opportunity; I’ve been dying to ask this. There have been a lot of sessions where we have been talking about having a regulatory framework on AI and having independent assurances, but the way things are right now, even the auditors, the regulators and so on are heavily reliant on AI. So who would be watching the watchers?

C. Raj Kumar

Good one, good one. Cyril, that one is for you: who would be watching the watchers? All right, next question. The standing people, let’s give the standing person the mic here; they’ve been standing for long.

BK Patnaik (Audience Member 2)

I’m BK Patnaik from Orissa. I am asking a question to Mr. Patra that I could ask in Odisha itself, anyway. Mr. Patra, what you have said, that you will change the lives of farmers in India with AI data, will it be successful? I am asking Mr. Patra.

C. Raj Kumar

Last question there, and another for Mr. Tharoor. One, one, one. You can just raise your voice. Go ahead.

Audience Member 3

So, we have a venture that started for men exclusively; it’s called Dora Health, exclusively for men. Now, the concern that I want to share on this panel, where we are talking about a regulatory framework for AI, is that I have seen it is very difficult to get information specifically on men, to work on techniques that can help them. Governments have failed to research it, companies have failed to research it; even the third-party cookies that are shared don’t really look into this specific aspect, and men’s mental health is just one example. So what do you recommend: how would the government, through a regulatory framework, ensure that such precise data is given into the right hands?

C. Raj Kumar

Got you. Let’s have Ms. Vedashree answer that question, but Cyril and Sasmit as well.

Rama Vedashree

Yes, yes, yes. So I think you raise the right question. This is where, just to clarify, we are not actually discussing the regulation of AI; this panel was around open data. So the sectoral data that I talked about, when it’s healthcare data, you’re talking about mental health data. But it’s very rare that personally identifiable data will ever be opened up. I don’t think it will ever be opened up, and that is important. But this is where, for example, in Germany, there is a healthcare act with a provision whereby patients can choose to ask their healthcare institution to share some specific data. Let’s say I’ve had some critical illness.

I’m willing to share everything, anonymized, so that it goes toward research. So I think we need some progressive regulations, and also education and awareness. Regulation alone will not open up this data.

Audience Member 3

If I may just add something to it. The reason I said regulation is because I’m actually working on an AI pilot system right now which can analyze the chats that take place between a man and a confidant who works for us. After analyzing these chats, we have a very curated list of non-sensitive data that can be shared. So what I’m asking is: as a government, what is the regulatory framework that can be put into place so that this non-sensitive data can be shared for research?

Rama Vedashree

So there is some discussion going on. Guidelines around AI came from the Ministry of Electronics and IT, so we can expect some movement on it. This could also be taken offline.

Dr. Sasmit Patra

Dr. Patnaik, lovely shades. As far as AI is concerned, I think yesterday, when the Honorable Prime Minister inaugurated the summit, he said we are looking at humanist AI. I think that is the cornerstone: inclusive and humanist. If it doesn’t solve the problems of the farmers, the problems of the healthcare workers, the problems of the tribals in the state of Odisha and elsewhere, then how is AI going to benefit humanity? So the Indian AI thought leadership, so to say, comes from the Indian concept of Vasudhaiva Kutumbakam: the whole world is one family. We are humanist, we are inclusive, we want our AI to help everyone.

C. Raj Kumar

Thank you so much, Sasmit. Over to you, Sir.

Cyril Shroff

So I’ll take your question. The question was: who is going to mind the watchers, or who is going to watch the watchers? I think the answer lies in our constitution. The answer has been there for the last 75 years: the courts and the rule of law. We actually have, I think, the best rule-of-law system in the world. Earlier I used to give credit to the United States for that, but everything that has happened in the last 18 months has shown that that’s not true: their legal system can be arm-twisted, can be pressured, lawyers can be pressured, all of that. That can never happen in India. We actually have a much better democracy, even though it may be clumsy in the way we sometimes go about it, and it may be frustrating. There is only one answer in India, which is the courts. And the second answer, I think, is ethics. AI is going to need a completely different ethics code, about biases, about so many things. It may not be law, but it is something which the industry will have to evolve for itself; there are a number of topics around this, and I think we are working on that in the center. So these two themes will provide the answer to how you are going to regulate all of this. The second one is a bit more ambiguous, because ethics conversations are always more amorphous, but the courts, like it or not, will finally be the ones. Puttaswamy is an example of that, and there are so many similar examples. Finally, the courts always come through: der hai, andher nahi hai (there may be delay, but no injustice).

C. Raj Kumar

Thank you so much, sir. We started with the first word by Shashi Tharoor; we will have the last word by Shashi Tharoor.

Shashi Tharoor

That’s been a fascinating discussion. I think some of the questions raised, and perhaps the ones that weren’t raised, are already pointing the way to some of the further areas we need to converse about. But we also have to be very realistic and anchored in what we are talking about. When our friend from Orissa asked that question about agriculture, I liked Sasmit Patra’s answer as an aspirational answer, but I am very conscious that even as the finance minister announces a special budget provision for AI in agriculture, the vast majority of our farmers can’t afford tractors, can’t afford tillers, don’t have pumps, don’t have guaranteed sources of water, and in many cases have no 24 hours of electricity. In those circumstances, how is AI going to be applied, how many farmers will it reach, and in how many ways will it transform agriculture?

I think humanist agriculture, humanist AI, is a laudable goal, but we have to relate it to the reality of our own people and the circumstances we are in. I mean, I would love to agree with Cyril Shroff on pretty much everything he says, but with our legal system we face, for example, the undoubted fact that your judiciary needs AI to begin with. You’ve got 5 million pending cases in this country. How can we celebrate the rule of law?

Cyril Shroff

50 million.

Shashi Tharoor

50 million. So how can we celebrate the rule of law when justice delayed, as that old cliché goes, is justice denied? So again, let’s anchor all this in the real world. When you speak of men’s health, or anybody’s mental health for that matter, it seems to me you’re touching on an extremely important issue. But a lot of this depends on the circumstances: the person is confiding into a doctor or, in your case, into a chat. How much of that can be anonymized truly effectively, how much of it can be traced back, how much of it can cause confidentiality breaches? The purpose of health data aggregation ought to be to solve similar problems for other people. In other words, let’s say doctors in the West may have access to 10 instances of a rare disease, and in India there may be 1,000 instances of the same disease.

So if we had AI data in India that aggregated all that, then certainly the West might have the scientific technology to research it and come up with a cure, or a better cure, or whatever, that can be applied in India. But for all that to happen, we need regulations. We need to figure out, if there’s monetization, who benefits; on what terms the data is given; how we have a law that says what will come back to us, et cetera, et cetera. It would be absurd if those 1,000 Indian cases were added to the 10 Western cases to create proprietary AI software that the 10,000 people here can’t afford to benefit from. So we need to create models. That’s the point that I was trying to make when I talked about the digital Raj, but we have here a moderating Raj in action, so I better stop. Thank you all for listening. Thank you so much. Thank you.

C. Raj Kumar

Thank you, Arun Prabhu, Irina Ghose, Asha Jadeja Motwani, Shashi Tharoor, Cyril Shroff, Sasmit Patra and Ms. Vedashree. I want to particularly thank Shashi, who came for the keynote address and was planning to leave in 10 minutes, but stayed through the entire panel. So give the entire panel and Shashi a big round of applause. Thank you to Asha, who agreed to join this morning. So thank you very much.

Related Resources: Knowledge base sources related to the discussion topics (18)
Factual Notes: Claims verified against the Diplo knowledge base (3)
Confirmed (high)

“C. Raj Kumar staged a role‑play featuring UK Prime Minister Jim Hacker and senior civil servants discussing open‑data, including a move from voluntary initiatives to a statutory regulatory framework.”

The knowledge base describes the fictional Prime Minister Jim Hacker from “Yes Minister” and includes a dialogue where the Prime Minister and Sir Humphrey discuss the risks and benefits of statutory open-data mandates, matching the reported role-play scenario [S71] and [S4].

Additional Context (medium)

“Shashi Tharoor defined open data as “data that is made accessible for use, reuse and redistribution with minimal legal or technical barriers”.”

The knowledge base characterises open data as a public good that is non-excludable and non-rivalrous, emphasizing minimal barriers to access, which adds nuance to Tharoor’s definition [S76].

Additional Context (medium)

“Rama Vedashree warned that legacy static CSV/PDF releases are obsolete and that modern AI requires AI‑ready open data, always available via APIs with metadata and standards.”

A discussion in the knowledge base highlights the need for an AI-Ready Data Framework, including machine-readable catalogs, metadata, and standardized APIs, supporting Vedashree’s point about AI-ready data [S81].

External Sources (82)
S1
https://dig.watch/event/india-ai-impact-summit-2026/regulating-open-data_-principles-challenges-and-opportunities — It is a question of power, shaping sovereignty and surveillance, innovation and inclusion, freedom and fairness in our d…
S2
AI-Powered Chips and Skills Shaping Indias Next-Gen Workforce — -Ashwini Vaishnaw- Role/Title: Honorable Minister (appears to be instrumental in India’s semiconductor industry developm…
S3
Open Forum #58 Safety of journalists online — Audience: Hello. Thank you. I’ve listened to a lot of conversation. By the way, a wonderful insight. I’ve enjoyed i…
S4
Regulating Open Data_ Principles Challenges and Opportunities — BK Patnaik (Audience Member 2): Dr. Patnaik, lovely shades. As far as … AI is concerned, I think yesterday when the H…
S5
AI Transformation in Practice_ Insights from India’s Consulting Leaders — -Audience member 2- Abhinav Saxena, Consultant at Capacity Building Commission, Government of India -Audience member 3-…
S6
How AI Is Transforming Indias Workforce for Global Competitivene — -Pragya- (Role/title not specified, mentioned briefly at the beginning) -Sangeeta Gupta- Panel moderator (role/title no…
S7
Keynote Address_Revanth Reddy_Chief Minister Telangana — -Participant: Role/Title: Not specified, Area of expertise: Not specified (appears to be an event moderator or organizer…
S8
Day 0 Event #82 Inclusive multistakeholderism: tackling Internet shutdowns — – Nikki Muscati: Audience member who asked questions (role/affiliation not specified)
S9
ISBN: — One-third of all Internet users today are children. Soon with the expansion of connectivity in the near future, every se…
S10
Multilateralism under Challenge? Power, International Order, and Structural Change — Under Challenge? Power, International Order; and Structural Change Edited by Edward Newman, Ramesh Thakur and John Tir…
S11
Regulating Open Data_ Principles Challenges and Opportunities — Last point, and I just want to put that on record because I’m on the… the Parliamentary Oversight Committee on Communi…
S12
Day 0 Event #82 Inclusive multistakeholderism: tackling Internet shutdowns — – Nikki Muscati: Audience member who asked questions (role/affiliation not specified)
S13
Enhancing CSO participation in global digital policy processes: Roles, structures, and accountability — Audience:My name is Horst Kremers from Berlin, Germany. And during my so-called working lifetime, I also worked with UNI…
S14
Preface — Greg Aaron Joe Abley Jaap Akkerhuis Don Blumenthal Lyman Chapin David Conrad Patrik Fältström Jim Galvin Mark Kosters Ja…
S15
https://dig.watch/event/india-ai-impact-summit-2026/regulating-open-data_-principles-challenges-and-opportunities — Thank you so much, Vedashree. That was very concise and even compelling. Especially coming from a regulatory standpoint….
S16
Keynote-Dario Amodei — – Irina Ghos: Managing Director for Anthropic India, has three decades of experience building businesses in India (menti…
S17
Building Population-Scale Digital Public Infrastructure for AI — – Irina Ghose- Esther Dweck – Nandan Nilekani- Irina Ghose
S18
AI Transformation in Practice_ Insights from India’s Consulting Leaders — -Audience member 1- Founder of Corral Inc -Audience member 6- Role/title not mentioned
S19
Day 0 Event #82 Inclusive multistakeholderism: tackling Internet shutdowns — – Nikki Muscati: Audience member who asked questions (role/affiliation not specified)
S20
Building the Workforce_ AI for Viksit Bharat 2047 — -Audience- Role/Title: Professor Charu from Indian Institute of Public Administration (one identified audience member), …
S21
Transforming Agriculture_ AI for Resilient and Inclusive Food Systems — -Arun Pratihast: Senior Researcher at Wageningen University Environmental Research -Speaker 5: Role/title not mentioned
S22
https://dig.watch/event/india-ai-impact-summit-2026/regulating-open-data_-principles-challenges-and-opportunities — Thank you so much, Ms. Vedashree. I haven’t forgotten you, Arun. You’re, of course, our own. Arun, of course, is a partn…
S23
Announcement of New Delhi Frontier AI Commitments — -Abhishek: Role/Title: Not specified (invited as distinguished leader of organization), Area of expertise: Not specified…
S24
https://dig.watch/event/india-ai-impact-summit-2026/regulating-open-data_-principles-challenges-and-opportunities — It is a question of power, shaping sovereignty and surveillance, innovation and inclusion, freedom and fairness in our d…
S25
Launch / Award Event #57 Governing Identity Online Nations and Technologists — Benjamin Akinmoyeje: Thank you. Good morning, everybody, and thank you for the opportunity to have me here. So I’m going…
S26
Global Perspectives on Openness and Trust in AI — – Alondra Nelson- Audience member 3
S27
AI Transformation in Practice_ Insights from India’s Consulting Leaders — -Audience member 3- Student -Audience member 6- Role/title not mentioned
S28
Building the Workforce_ AI for Viksit Bharat 2047 — -Audience- Role/Title: Professor Charu from Indian Institute of Public Administration (one identified audience member), …
S29
Social Innovation in Action / DAVOS 2025 — – Raj Kumar: President and Editor-in-Chief of DevX Raj Kumar: I think just the fact that a minister of industry and t…
S30
Africa’s Prospects in the New Global Economy: A Comprehensive Analysis from Davos — Hello and welcome, all of you in the room here in Davos, everyone who’s following this conversation virtually. I’m Raj K…
S31
Fast-tracking a digital economy future in developing countries (UNCTAD) — Building a conducive legal framework and endorsing e-commerce laws are crucial for attracting investment and ensuring a …
S32
WS #162 Overregulation: Balance Policy and Innovation in Technology — Regulation is necessary but should not stifle innovation
S33
Opening — Balance needed between innovation and regulation
S34
Keynotes — Marianne Wilhelmsen: but as Norway prepares for the upcoming IGF 2025, I look forward to welcoming many of you in June a…
S35
Opening and Sustaining Government Data | IGF 2023 Networking Session #86 — Advocacy for open data is taking place in the Maldives, where Women in Tech Maldives is playing a significant role. This…
S36
AI and Digital in 2023: From a winter of excitement to an autumn of clarity — At thetechnical level, data needs standards in order to be interoperable. Here, the work of standardisation and technica…
S37
Democratizing AI Building Trustworthy Systems for Everyone — Not all data can be open, but exchangeable and shareable data frameworks are needed
S38
The Foundation of AI Democratizing Compute Data Infrastructure — Thank you. So I think two characteristics of digital public infrastructure, which are key, are to ensure that not only t…
S39
Driving Indias AI Future Growth Innovation and Impact — The innovate side really comes down to. Areas like skilling, which I know when Minister Chaudhry joins us, we will get i…
S40
Leave No One Behind: The Importance of Data in Development | IGF 2023 — The speakers emphasized the significant impact of data as a crucial driver of economies, often referred to as the “new o…
S41
ORF publishes study on India`s Open Data Initiatives — The Observer Research Foundation (ORF) has published an in-depth study of India`s Data Initiatives. The report named”To…
S42
India to launch national data governance policy — Indian finance minister Nirmala Sitharamanannounced that the government is working on approving a national data governan…
S43
HIGH LEVEL LEADERS SESSION I — Data can drive innovation, provide economic opportunities and impact future generations.
S44
The Digital Town Square Problem: public interest info online | IGF 2023 Open Forum #132 — It calls for the popularisation of the African Union data policy framework and the ratification of the Malabo Convention…
S45
Setting the Rules_ Global AI Standards for Growth and Governance — Implementation requires interoperable and modular standards ecosystems to avoid reinventing approaches for each sector o…
S46
Facilitating an integrated approach to digital issues — Speed: In a world where communications have become instant, implementation of solutions must be made in phases, so that …
S47
AI Governance Dialogue: Steering the future of AI — Infrastructure | Legal and regulatory Martin argues that high-level policy commitments must be accompanied by detailed …
S48
Welcome to the IGF2021 Final report! — Cooperation needs to take place in a myriad of areas, from investment in technology and skills to the development ofsoun…
S49
WS #133 Platform Governance and Duty of Care — These key comments fundamentally shaped the discussion by introducing three critical analytical frameworks: (1) the impo…
S50
Trusted Connections_ Ethical AI in Telecom & 6G Networks — “There has to be trust, there has to be some amount of regulation, there has to be some amount of safety that comes with…
S51
How Trust and Safety Drive Innovation and Sustainable Growth — And an organization like the ICO is there for both sides to see, well, there’s someone actually overseeing that. And tha…
S52
Regulating Open Data_ Principles Challenges and Opportunities — “They are asking the real question, should there be regulatory teeth so that government data sharing isn’t optional good…
S53
ORF publishes study on India`s Open Data Initiatives — The Observer Research Foundation (ORF) has published an in-depth study of India`s Data Initiatives. The report named”To…
S54
Sandboxes for Data Governance: Global Responsible Innovation | IGF 2023 WS #279 — Regulatory frameworks are needed to reap the benefits of data while protecting citizens.
S55
Opening and Sustaining Government Data | IGF 2023 Networking Session #86 — Advocacy for open data is taking place in the Maldives, where Women in Tech Maldives is playing a significant role. This…
S56
HIGH LEVEL LEADERS SESSION I — Data can drive innovation, provide economic opportunities and impact future generations.
S57
Data free flow with trust: a collaborative path to progress (ICC) — The free flow of cross-border data is considered vital for the global economy. It is estimated that by the end of 2023, …
S58
Data governance — In the second case, openly available data (such as data from social networks) might be used by foreign entities in ways …
S60
Collaborative AI Network – Strengthening Skills Research and Innovation — Data readiness, interoperability, and standards
S61
Embedding Human Rights in AI Standards: From Principles to Practice — Industry adoption is key – standards must be practical and focused on sectors/use cases that will actually be implemente…
S62
Regional Leaders Discuss AI-Ready Digital Infrastructure — The discussion highlighted that AI infrastructure development must be understood as part of broader development strategi…
S63
What is it about AI that we need to regulate? — The question of achieving interoperability of data systems and data governance arrangements across different stakeholder…
S64
Setting the Rules_ Global AI Standards for Growth and Governance — Implementation requires interoperable and modular standards ecosystems to avoid reinventing approaches for each sector o…
S65
How Trust and Safety Drive Innovation and Sustainable Growth — Fantastic. Yeah, the importance of having watchdogs, yeah, entities that are watching and observing, commenting, enforci…
S66
Leaders TalkX: Moral pixels: painting an ethical landscape in the information society — The regulatory framework needs to be robust and reinforced
S67
Driving Indias AI Future Growth Innovation and Impact — Trust, Governance, and Regulatory Framework
S68
Diplomacy in beta: From Geneva principles to Abu Dhabi deliberations in the age of algorithms — Governance must extend across the full AI lifecycle: pre-design, design, development, evaluation, testing, procurement, …
S69
Ministerial Roundtable — Strategy built around four pillars: Governance and Ethics with Clear Regulatory Standards and Human Oversight
S70
Ministry of Communications & Information Technology — Roles and responsibilities as they pertain to Information Security are outlined below: | Role Description …
S71
‘Yes Minister’ as the novel Turing Test for advanced AI — “Yes Minister” chronicles the exploits of Minister Jim Hacker, his secretary Bernard, and the chief bureaucrat Sir Humph…
S72
Rolex diplomacy — The recipients are often democratically elected politicians, senior civil servants, or military generals who possess influence…
S73
Elon Musk and UK PM Rishi Sunak delve into AI safety, China, and the future of work at AI summit — Elon Musk, Tesla and SpaceX CEO, and Rishi Sunak, the British Prime Minister, had a wide-ranging conversation on AI, Chi…
S75
NRIs MAIN SESSION: DATA GOVERNANCE — Artificial Intelligence depends on the data system, which has to be balanced. Furthermore, it is noted that support for …
S76
When Technology Meets Humanity — Data can have various legal statuses. It can be a public good by being both non-excludable (anyone can access it) and no…
S77
General Assembly — contribute to the development of the shared environmental information system of the European Environment Agen…
S78
Seismic Shift — However, in a move welcomed by Silicon Valley companies, the draft backs off from the more aggressive data localization …
S79
India's AI Leap: Policy to Practice with AIP2 — Brando Benifei, co-rapporteur of the EU AI Act, argued that voluntary ethical frameworks alone are insufficient. “If you su…
S80
WS #208 Democratising Access to AI with Open Source LLMs — Melissa Muñoz Suro: So basically, building on what I was mentioning earlier about our national AI strategy back in the D…
S81
Safe and Responsible AI at Scale Practical Pathways — -AI-Ready Data Framework and Standards: The panelists emphasized the need for a unified framework to define what makes d…
S82
Open Forum #27 Make Your AI Greener a Workshop on Sustainable AI Solutions — Adham Abouzied: Yeah, I think it’s very interesting. I would reiterate again what I’m saying. I think, yes, smaller focu…
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
C. Raj Kumar
6 arguments · 137 words per minute · 2245 words · 980 seconds
Argument 1
Legal backbone ensures uniform participation and investor confidence (C. Raj Kumar)
EXPLANATION
Raj Kumar argues that without a statutory framework for open data, participation by government agencies is uneven, leading to investor uncertainty and developer frustration. A legal backbone would standardise data sharing, providing consistency and confidence for investors.
EVIDENCE
In his opening scenario he describes a discussion about moving from voluntary open-data initiatives to a statutory regulatory framework, noting that without legal mandates “participation is uneven”, “investors get nervous” and “developers complain about unreliable data sets” [11-20].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
External sources stress that statutory mandates are needed to avoid uneven data sharing and to build investor confidence, highlighting the risk of voluntary approaches and the role of a clear legal framework in attracting investment [S4] [S31].
MAJOR DISCUSSION POINT
Need for a statutory regulatory framework for open data
Argument 2
Regulation must be balanced to avoid stifling innovation and entrepreneurship
EXPLANATION
Raj Kumar warns that overly strict regulatory measures could kill the momentum of emerging startups and hinder entrepreneurial activity in the AI sector.
EVIDENCE
He states that while framing regulation, “we also should not stifle growth and innovation because that’s the biggest death knell that we can sound towards a lot of entrepreneurship that’s emerging” [215-216].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The need for a balanced regulatory approach that does not choke innovation is discussed in sources on over-regulation and the importance of policy-innovation equilibrium [S32] [S33].
MAJOR DISCUSSION POINT
Balancing regulatory safeguards with innovation incentives
Argument 3
Open data serves as a catalyst for evidence‑based policymaking, targeted welfare delivery and capital formation
EXPLANATION
Raj Kumar contends that high‑quality public data enables governments to design more effective policies, monitor implementation, and attract investment by providing reliable information for decision‑making and market confidence.
EVIDENCE
He notes that “high-quality public data improves evidence-based policymaking, targeted welfare delivery, and even capital formation” and that without a statutory framework participation is uneven, causing investor nervousness and developer frustration [21-23].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Open data’s role in driving evidence-based policy, democratic accountability and economic growth is highlighted as a transformative factor in multiple sources [S4] [S40].
MAJOR DISCUSSION POINT
Economic and policy benefits of open data
Argument 4
Open data initiatives must be underpinned by robust technical architecture and standards to ensure safe, interoperable, and AI‑ready data sharing.
EXPLANATION
Raj Kumar stresses that legal mandates alone are insufficient; effective open data requires secure environments, strong anonymisation protocols, synthetic‑data generation, and interoperable standards so that AI systems can reliably consume public datasets while protecting privacy.
EVIDENCE
He outlines that the panel discussion covered architecture, secure environments, anonymisation protocols, synthetic data, and interoperable standards as essential components of an open-data framework, indicating the technical depth needed for trustworthy AI applications [24-25].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Technical standards, interoperable architectures and secure environments are identified as essential for trustworthy AI-ready open data [S36] [S37].
MAJOR DISCUSSION POINT
Need for technical safeguards and standards in open data governance
Argument 5
Open data regulation could usher a new administrative era, reshaping governance structures.
EXPLANATION
Raj Kumar suggests that moving from voluntary to statutory open‑data mandates may fundamentally change how administrations operate, leading to a new era of governance.
EVIDENCE
During the panel he remarks, “We may be entering a new administrative era,” indicating that the adoption of a regulatory framework for open data could transform administrative processes and norms [130-133].
MAJOR DISCUSSION POINT
Impact of open data regulation on administrative structures
Argument 6
Open data is the raw material for AI‑driven digital economy; without a regulatory framework the government effectively locks valuable data, stifling innovation.
EXPLANATION
Raj Kumar argues that data generated by government agencies is the essential input for AI applications and the broader digital economy, and that failing to regulate its sharing is akin to building a digital economy while keeping the data warehouse sealed.
EVIDENCE
He states that “if AI is the future, the data is raw material… refusing to regulate sharing properly is like building a digital economy and locking the warehouse” [32-34].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Data is described as the ‘new oil’ and a critical input for AI-driven economies, underscoring the need for regulated sharing to unlock value [S40] [S4].
MAJOR DISCUSSION POINT
Need for statutory regulatory framework for open data
Shashi Tharoor
17 arguments · 152 words per minute · 2743 words · 1081 seconds
Argument 1
Voluntary approaches lead to uneven sharing; statutory mandates provide accountability (Shashi Tharoor)
EXPLANATION
Tharoor contends that relying on voluntary data sharing creates gaps in participation, whereas statutory mandates would impose accountability and ensure all ministries contribute uniformly.
EVIDENCE
He stresses that the decisive question is “who controls its use, who extracts its value, and who is left behind”, implying that without mandatory rules the landscape remains uneven and unaccountable [46-53].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Sources argue that statutory mandates are required to move from voluntary goodwill to institutional obligation and to ensure uniform participation [S4] [S31].
MAJOR DISCUSSION POINT
Need for a statutory regulatory framework for open data
Argument 2
Open data strengthens democratic accountability, welfare tracking and creates private ecosystems (Shashi Tharoor)
EXPLANATION
Tharoor highlights that open government data enables citizens and civil society to monitor public spending, assess welfare delivery, and foster private sector innovation.
EVIDENCE
He cites India’s open-government data platform being used to track welfare coverage and expose implementation leakages, and notes that such transparency “strengthens democratic accountability” [64-66].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Open government data is linked to stronger democratic accountability, transparency and the emergence of private sector ecosystems in external analyses [S4] [S40].
MAJOR DISCUSSION POINT
Open data as public infrastructure that drives transparency and innovation
Argument 3
Historical examples (weather data, COVID dashboards) show transformative impact of open data (Shashi Tharoor)
EXPLANATION
Tharoor points to past instances where releasing public datasets spurred commercial ecosystems and rapid crisis response, illustrating the power of open data.
EVIDENCE
He refers to the United States releasing meteorological data, which seeded private ecosystems in weather forecasting, logistics, insurance and risk assessment [68-71], and to openly shared health data during the COVID-19 pandemic that enabled faster responses and better coordination [73-74].
MAJOR DISCUSSION POINT
Open data as public infrastructure that drives transparency and innovation
Argument 4
Unstructured openness can become tokenism, create privacy breaches and enable asymmetrical extraction (Shashi Tharoor)
EXPLANATION
Tharoor warns that releasing data without clear safeguards can reduce open‑data initiatives to symbolic gestures, exposing personal information and allowing richer actors to extract value disproportionately.
EVIDENCE
He states that “Open data, poorly structured, can generate new vulnerabilities, even as it promises transparency” and that without safeguards openness may devolve into tokenism and asymmetrical extraction [77-84].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The risk of tokenistic open-data initiatives that generate privacy vulnerabilities and asymmetric value extraction is highlighted as a concern in the literature [S4] [S37].
MAJOR DISCUSSION POINT
Risks, privacy concerns and the need for safeguards
Argument 5
Strong anonymization, informed consent and grievance mechanisms are essential (Shashi Tharoor)
EXPLANATION
Tharoor argues that any open‑data regime must embed robust privacy protections, clear consent procedures, and accessible redress to protect individuals.
EVIDENCE
He outlines the need for “strong anonymization and privacy protections”, “principle of consent and control”, and “grievance mechanisms” to ensure transparency does not compromise rights [88-95].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Robust anonymisation, consent procedures and grievance mechanisms are recommended as core safeguards for open-data programmes [S37].
MAJOR DISCUSSION POINT
Risks, privacy concerns and the need for safeguards
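The safeguards Tharoor names (strong anonymisation, consent, grievance mechanisms) have direct counterparts in data-release practice. As a minimal illustrative sketch only — the field names, salt handling, and record shape are assumptions, not any system described in the session — salted pseudonymisation of direct identifiers before a record is published might look like:

```python
import hashlib

# Hypothetical salt held by the publishing department, never released.
# Hashing alone is NOT sufficient anonymisation for real deployments;
# vetted techniques (k-anonymity, differential privacy) would be needed.
SALT = b"department-held-secret"

def pseudonymise(record: dict, id_fields=("name", "national_id")) -> dict:
    """Replace direct identifiers with truncated salted one-way hashes."""
    out = dict(record)
    for field in id_fields:
        if field in out:
            digest = hashlib.sha256(SALT + str(out[field]).encode()).hexdigest()
            out[field] = digest[:16]  # opaque token, not reversible from the release
    return out

published = pseudonymise(
    {"name": "A. Citizen", "national_id": "1234", "district": "X"}
)
```

The point of the sketch is the separation of concerns: non-identifying fields pass through unchanged for transparency, while anything that could support re-identification is transformed before release.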
Argument 6
Domestic digital infrastructure and skill development are required to avoid data capitulation to foreign firms (Shashi Tharoor)
EXPLANATION
Tharoor stresses that without building local capacity and infrastructure, open data will primarily benefit foreign tech giants, undermining sovereignty.
EVIDENCE
He links data sovereignty to capacity, noting that “the issue is not cross-border data flows per se” but whether openness is “reciprocal and capacity enhancing”; he calls for investment in domestic digital infrastructure and skill development [96-104].
MAJOR DISCUSSION POINT
Capacity building, standards and AI‑ready data
Argument 7
Global commitments (G20, UN Digital Compact) guide but India must craft sovereign data policies (Shashi Tharoor)
EXPLANATION
Tharoor observes that multilateral agreements set a direction for data governance, yet India must translate these into sovereign, domestically‑aligned policies.
EVIDENCE
He references the G20 New Delhi Leaders Declaration (2023) and the UN Global Digital Compact, noting they emphasise data for development, trust, security and domestic capacity building, and that India must ensure data “supports development, not undermines regulatory accountability” [106-112].
MAJOR DISCUSSION POINT
Geopolitical and strategic considerations
Argument 8
Effective AI deployment in agriculture and health must consider real‑world accessibility and affordability (Shashi Tharoor)
EXPLANATION
Tharoor cautions that AI solutions must be grounded in the material realities of farmers and patients, otherwise they risk being ineffective.
EVIDENCE
He notes that despite budget provisions for AI in agriculture, many farmers lack tractors, electricity and water, questioning how AI will reach them; similarly, he raises concerns about health-data anonymisation and practical deployment [389-390][398-402].
MAJOR DISCUSSION POINT
Practical challenges for end‑users and sectoral applications
Argument 9
India’s digital public infrastructure can serve as a scalable model for other developing countries
EXPLANATION
Tharoor highlights that platforms such as Aadhaar, UPI, DigiLocker, and IndiaStack have been offered as templates for other nations seeking affordable digital solutions.
EVIDENCE
He notes that “India’s experience with IndiaStack illustrates what this participation can look like… offered as a template for other developing countries” [119-122].
MAJOR DISCUSSION POINT
Exportability of India’s digital public goods
Argument 10
Artificial intelligence has become the operating system of modern society, making robust data governance indispensable
EXPLANATION
Tharoor argues that AI now underpins markets, governance and personal choices, so the rules governing data access, processing and control are central to ensuring that AI serves public interests rather than entrenching inequality.
EVIDENCE
He observes that “When artificial intelligence is no longer a distant frontier of innovation, it is rapidly becoming the operating system of our modern society” and adds that while data is often called the new oil, the real constraint is the power to process it, highlighting the need for governance of both data and AI [42-46].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Data is portrayed as a strategic asset akin to oil, and AI’s pervasiveness makes strong data governance essential for inclusive development [S40] [S4].
MAJOR DISCUSSION POINT
Pervasiveness of AI and the need for data governance
Argument 11
The primary bottleneck in the AI age is processing capacity, not data volume, highlighting the need for investment in compute infrastructure.
EXPLANATION
Tharoor argues that merely having large datasets is insufficient; the ability to process those datasets determines AI progress, so capacity‑building in computing resources is essential.
EVIDENCE
He observes that “the real constraint of the AI age is not the volume of data, but the power to process it,” and that this insight “punctures a convenient myth” about data abundance being the sole driver of AI advancement [46-48].
MAJOR DISCUSSION POINT
Capacity building and infrastructure constraints in AI
Argument 12
Trade agreements must avoid creating digital dependency or “virtual vassalage” and instead safeguard digital sovereignty.
EXPLANATION
Tharoor warns that without careful design, international trade deals can lock India into a subordinate digital position, making the country reliant on foreign platforms and limiting its ability to reap the benefits of its own data. He calls for trade policies that protect domestic digital autonomy while still enabling cross‑border cooperation.
EVIDENCE
He states that “Our trade agreements must not promote digital dependency or virtual vassalage… This dynamic is increasingly playing out in real policy debate” and cites examples of how data localisation and source-code disclosure clauses can narrow policy space for developing economies [78-84].
MAJOR DISCUSSION POINT
Geopolitical and strategic considerations
Argument 13
IndiaStack serves as a scalable, exportable model of digital public infrastructure for other developing nations.
EXPLANATION
Tharoor highlights that the suite of interoperable services—Aadhaar, UPI, DigiLocker—has demonstrated how a public‑digital backbone can drive inclusive innovation and can be offered as a template for other countries seeking affordable digital solutions.
EVIDENCE
He notes that “India’s experience with IndiaStack illustrates what this participation can look like… offered as a template for other developing countries” showing how the platform has been positioned as a developmental public good [119-122].
MAJOR DISCUSSION POINT
Exportability of India’s digital public goods
Argument 14
The emerging consensus favours structured openness rather than digital isolation or unrestricted data flows.
EXPLANATION
Tharoor argues that the future of data governance lies in a balanced approach that combines openness with safeguards, allowing innovation and cooperation to thrive while preserving national sovereignty and institutional strength.
EVIDENCE
He observes that “the emerging consensus is not about unrestricted flows or digital isolation, but about structured openness where innovation and cooperation coexist with sovereignty and institutional strength” [111-112].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
A balanced, structured openness that combines innovation with safeguards is advocated as the emerging consensus in policy discussions [S33] [S32].
MAJOR DISCUSSION POINT
Balanced approach to data openness
Argument 15
The G20 New Delhi Leaders Declaration (2023) places digital public infrastructure at the centre of inclusive growth, guiding India’s data policy toward development‑oriented governance.
EXPLANATION
Tharoor notes that the G20 declaration emphasises data for development, trust, security and domestic capacity building, signalling that India should align its open‑data framework with these principles.
EVIDENCE
He references the G20 New Delhi Leaders Declaration in 2023, which highlighted digital public infrastructure as central to inclusive growth and linked data governance with trust, security and domestic capacity building [107-108].
MAJOR DISCUSSION POINT
International commitments shaping national data policy
Argument 16
The United Nations Global Digital Compact calls for safe, transparent and trustworthy data governance while respecting national regulatory frameworks, providing a multilateral blueprint for India’s open‑data strategy.
EXPLANATION
Tharoor points out that the Global Digital Compact urges stronger digital capacity in developing countries and stresses cooperation that respects each nation’s regulatory space, which can inform India’s approach to open data.
EVIDENCE
He cites the Global Digital Compact’s call for safe and transparent trustworthy data governance, stronger digital capacity in developing countries, and international cooperation that respects national regulatory frameworks [110-111].
MAJOR DISCUSSION POINT
Geopolitical and strategic considerations in data governance
Argument 17
Digital trade agreements can create digital dependency and ‘virtual vassalage’, so India must embed safeguards to protect its digital sovereignty.
EXPLANATION
Tharoor warns that one‑sided concessions on digital taxation and trade can lock India into a subordinate position, urging policy design that avoids dependence on foreign platforms.
EVIDENCE
He states that trade agreements must not promote digital dependency or virtual vassalage, noting examples where Indonesia and Malaysia have succumbed to such clauses, and that data localisation and source-code disclosure can narrow policy space for developing economies [78-84].
MAJOR DISCUSSION POINT
Geopolitical and strategic considerations
Arun Prabhu
4 arguments · 135 words per minute · 376 words · 166 seconds
Argument 1
Absence of clear legal architecture prevents a sustainable open‑data ecosystem (Arun Prabhu)
EXPLANATION
Prabhu argues that without explicit legal standards for anonymisation, data interchange and purpose, any open‑data initiative remains fragile and vulnerable to future legal challenges.
EVIDENCE
He points out that India lacks a “clear identified anonymisation standard, clear identified public data interchange standards” and a recognised purpose for processing open public data, which makes large-scale projects like LLMs exposed to judicial, executive and legislative storms [256-262].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The lack of explicit legal standards for anonymisation, data interchange and purpose is cited as a barrier to a durable open-data ecosystem, echoing calls for statutory frameworks [S4] [S31].
MAJOR DISCUSSION POINT
Need for a statutory regulatory framework for open data
Argument 2
A clearly defined public purpose for open‑data processing is essential to align initiatives with constitutional principles
EXPLANATION
Prabhu argues that without an explicitly recognised purpose for using open public data, projects risk legal challenges and may not serve the public good.
EVIDENCE
He points out that “We do not have a clear recognised purpose for the processing of open public data sets for public good and public improvement” [260-261].
MAJOR DISCUSSION POINT
Need for purpose‑driven open‑data frameworks
Argument 3
Legal uncertainty surrounding open‑data initiatives deters innovators because current practices could become illegal under future legislation.
EXPLANATION
Prabhu warns that without a clear, stable legal framework, today’s open‑data projects risk being deemed unlawful later, discouraging investment and innovation.
EVIDENCE
He explains that a government official who creates an open-data repository may find his action “frowned upon… downright illegal” years later, and founders risk their businesses becoming “fundamentally unviable” due to shifting legal climates [258-262].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Over-regulation and legal uncertainty are warned to stifle innovation, reinforcing the need for balanced, predictable rules [S32] [S33].
MAJOR DISCUSSION POINT
Legal uncertainty hampers sustainable open‑data ecosystem
Argument 4
A sustainable open‑data ecosystem requires a legal framework that aligns with constitutional principles, notably the Puttaswamy judgment.
EXPLANATION
Prabhu stresses that any durable open‑data regime must be rooted in the Constitution’s guarantees of privacy and personal liberty, as articulated in the Puttaswamy decision, to ensure that data initiatives are both legally sound and consistent with fundamental rights.
EVIDENCE
He points out that “absent these 4 key important elements… which work coherently… with the constitutional principles which have been laid out in the Puttaswamy judgment” a legal architecture is needed to avoid future judicial or legislative challenges [258-262].
MAJOR DISCUSSION POINT
Need for a constitutionally anchored legal architecture for open data
Cyril Shroff
8 arguments · 174 words per minute · 889 words · 305 seconds
Argument 1
Regulatory clarity is prerequisite for innovation and market growth (Cyril Shroff)
EXPLANATION
Shroff maintains that clear, enforceable regulations create the foundation for AI‑ready data, which in turn fuels innovation and economic expansion.
EVIDENCE
He states that if data were “systematically available in a usable format, AI-ready format, that would actually spark a lot of innovation” and links this to the need for regulatory clarity [201-204].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Regulatory clarity is identified as a catalyst for innovation and market development, mirroring the experience of capital-markets where clear rules attract investment [S4] [S31].
MAJOR DISCUSSION POINT
Need for a statutory regulatory framework for open data
Argument 2
Transparent, reliable data boosts investor confidence, market efficiency and long‑term growth (Cyril Shroff)
EXPLANATION
Shroff draws an analogy between capital‑markets regulation and data regulation, arguing that trust created by transparent, enforceable rules attracts investment and sustains market development.
EVIDENCE
He cites India’s capital-markets growth (25% of global IPOs) being driven by “regulatory clarity”, “right enforcement”, and uniform standards, and suggests the same logic applies to data and the digital world [268-279].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Transparent, enforceable data rules are linked to investor confidence and sustained economic growth in external analyses of digital economies [S31] [S4].
MAJOR DISCUSSION POINT
Economic and investment implications of reliable public data
Argument 3
Data as strategic asset can attract data‑center investment and shift India from a services‑based to a product‑based tech sector (Cyril Shroff)
EXPLANATION
Shroff argues that a trustworthy data regime will encourage data‑center construction and enable India to move up the value chain from services to product‑centric technology.
EVIDENCE
In the same passage about capital-markets he notes that trust in transparent information and a reliable legal system are needed for “multibillion-dollar investments” and for India to transition to a product-based tech sector [268-279].
MAJOR DISCUSSION POINT
Economic and investment implications of reliable public data
Argument 4
“Who watches the watchers?” – courts and the rule of law provide ultimate oversight (Cyril Shroff)
EXPLANATION
Shroff asserts that India’s constitutional courts and the rule of law are the final safeguard over AI and data governance, ensuring accountability beyond industry self‑regulation.
EVIDENCE
He explains that “the answer lies in our constitution… the courts and the rule of law” as the ultimate oversight mechanism, contrasting it with perceived weaknesses in other jurisdictions [381-387].
MAJOR DISCUSSION POINT
Governance, oversight and accountability of AI systems
Argument 5
India’s constitutional courts provide a robust, ultimate oversight mechanism for AI and data governance
EXPLANATION
Shroff contends that the judiciary, grounded in the constitution, serves as the final safeguard over AI systems, compensating for any regulatory gaps.
EVIDENCE
He explains that “the answer lies in our constitution… the courts and the rule of law” serve as the ultimate oversight [384-386].
MAJOR DISCUSSION POINT
Judicial oversight as a pillar of AI governance
Argument 6
Uniform regulatory language and standards across data governance are essential for building trust, analogous to capital‑markets regulation.
EXPLANATION
Shroff emphasizes that consistent terminology and standards in data regulation foster confidence among investors and innovators, just as uniformity helped India’s capital‑markets mature.
EVIDENCE
He states that “just uniformity in regulatory language that is used” is key, drawing a parallel between the regulatory clarity that propelled capital-markets growth and the need for similar clarity in data governance [276-278].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Standardised data formats, metadata and interoperable APIs are highlighted as crucial for trust and effective data sharing [S36] [S37].
MAJOR DISCUSSION POINT
Uniform regulatory language as trust builder
Argument 7
India should craft its own data regulation rather than copying Western models
EXPLANATION
Shroff’s brief one‑liner “Not copying the West” signals his belief that India must develop an indigenous regulatory framework for open data and AI that reflects its unique legal, economic and societal context, instead of simply adopting foreign rules.
EVIDENCE
During the closing segment, when asked for a one-liner about India’s future, Shroff responded succinctly, “Not copying the West.” This statement was made immediately after a request for a concise vision, indicating his stance on independent regulatory design. [318]
MAJOR DISCUSSION POINT
Geopolitical and strategic considerations
Argument 8
Ethics should complement judicial oversight as a self‑regulatory layer for AI and data governance
EXPLANATION
Shroff argues that while India’s courts provide the ultimate legal safeguard, a parallel ethical framework is needed to guide AI developers and data practitioners, offering a more flexible, industry‑driven form of oversight.
EVIDENCE
In his response to the question “who is going to watch the watchers?”, Shroff emphasized that “the answer lies in our constitution… the courts and the rule of law” and added that “ethics… may be more ambiguous because ethics conversations always are more amorphous but it is something which the industry will have to evolve for itself.” This reflects his view that ethics serves as an additional oversight mechanism alongside the judiciary. [381-387]
MAJOR DISCUSSION POINT
Governance, oversight and accountability of AI systems
Rama Vedashree
8 arguments · 158 words per minute · 1267 words · 480 seconds
Argument 1
India’s data.gov.in platform was built for research but now needs AI‑ready standards, APIs and metadata (Rama Vedashree)
EXPLANATION
Vedashree explains that the original open‑data initiative focused on static CSV/PDF releases for research, but today AI demands continuous, API‑driven, metadata‑rich datasets.
EVIDENCE
She recounts the origin of the open-data movement and the launch of data.gov.in, then stresses that “now you need open data that is AI-ready, continuously available, with metadata and standards for interoperability” [149-163].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The shift from static datasets to AI-ready, API-driven, metadata-rich data requires technical standards and interoperable architectures, as discussed in the literature [S36] [S37].
MAJOR DISCUSSION POINT
Open data as public infrastructure that drives transparency and innovation
Argument 2
AI‑ready open data must be continuously available, interoperable and consumable via APIs (Rama Vedashree)
EXPLANATION
She argues that modern users expect real‑time API access rather than downloadable files, and that AI systems require interoperable, standardized data streams.
EVIDENCE
She notes that “nobody wants to download and do something offline”; instead data should be consumable through APIs and apps for both end-users and AI systems [162-163].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Continuous, API-based, interoperable data streams are identified as essential for AI consumption and innovation [S36] [S37].
MAJOR DISCUSSION POINT
Open data as public infrastructure that drives transparency and innovation
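Vedashree’s criterion — continuously available, metadata-rich, API-served — can be made concrete with a small check over a catalogue entry. The entry below is a hypothetical DCAT-style record; the keys, values, and URL are illustrative assumptions, not data.gov.in’s actual schema:

```python
# Hypothetical catalogue entry; all field names and the endpoint URL
# are placeholders, not a real government schema.
entry = {
    "title": "District rainfall (daily)",
    "format": "json",
    "api_endpoint": "https://example.org/api/rainfall",
    "update_frequency": "daily",
    "license": "open",
}

REQUIRED = {"title", "format", "api_endpoint", "update_frequency", "license"}

def is_ai_ready(meta: dict) -> bool:
    """'AI-ready' here means: machine-readable format, served via an API,
    and refreshed on a stated schedule rather than released ad hoc."""
    return (
        REQUIRED <= meta.keys()
        and meta["format"] in {"json", "csv"}
        and meta["update_frequency"] != "ad hoc"
    )
```

A static PDF upload would fail such a check on both format and endpoint, which is precisely the gap between the original data.gov.in releases and the AI-ready data she describes.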
Argument 3
Sector‑specific data (health, agriculture) must be opened securely and with proper anonymization (Rama Vedashree)
EXPLANATION
Vedashree stresses that sensitive domains require robust anonymisation and security protocols before data can be shared for innovation.
EVIDENCE
She references institutional data locked in nodal agencies, the need for “secure, anonymized” opening of health and agricultural datasets, and the importance of sector-specific safeguards [158-169].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Sector-specific safeguards, secure environments and strong anonymisation protocols are recommended for sensitive datasets [S37] [S36].
MAJOR DISCUSSION POINT
Risks, privacy concerns and the need for safeguards
Argument 4
Interoperable standards, metadata and API access are critical for AI‑ready datasets (Rama Vedashree)
EXPLANATION
She highlights that without common standards and rich metadata, AI developers cannot efficiently utilise public data.
EVIDENCE
She reiterates that “interoperable standards” and “metadata” are essential for AI-ready data, and that APIs are the preferred consumption method [158-163].
MAJOR DISCUSSION POINT
Capacity building, standards and AI‑ready data
Argument 5
Sector‑specific data sharing policies with consent and accountability are needed for health, finance, etc. (Rama Vedashree)
EXPLANATION
Vedashree calls for tailored data‑access frameworks that embed consent, accountability and sector‑appropriate safeguards.
EVIDENCE
She discusses the supply-demand gap, mentions the UK Payment Systems Directive and EU’s FIDA as examples of sector-level data-access policies, and stresses the need to map data needs to users [236-244].
MAJOR DISCUSSION POINT
Governance, oversight and accountability of AI systems
Argument 6
Sensitive health data (e.g., men’s mental health) cannot be fully open; progressive regulation and patient‑controlled sharing are required (Rama Vedashree)
EXPLANATION
She argues that personally identifiable health data should remain protected, with optional patient‑driven sharing for research under strict anonymisation.
EVIDENCE
She notes that “personally identifiable data will never be opened up” and cites the German healthcare act allowing patients to consent to share anonymised data for research [355-361].
MAJOR DISCUSSION POINT
Practical challenges for end‑users and sectoral applications
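The patient-controlled sharing model Vedashree cites from the German healthcare act can be approximated as a consent filter: only records whose owners have opted in, and not since revoked, are released, and identifiers are dropped at export. All field names below (`consented`, `revoked`, etc.) are invented for illustration.

```python
def exportable(records: list[dict]) -> list[dict]:
    """Return only records whose owner has opted in and not revoked consent,
    with identifying fields stripped before release."""
    return [
        {"record_id": r["record_id"], "condition": r["condition"]}
        for r in records
        if r.get("consented") and not r.get("revoked")
    ]

records = [
    {"record_id": "r1", "condition": "anxiety", "consented": True, "revoked": False},
    {"record_id": "r2", "condition": "depression", "consented": True, "revoked": True},
    {"record_id": "r3", "condition": "anxiety", "consented": False, "revoked": False},
]
shared = exportable(records)  # only r1 qualifies
```

Because consent is re-checked on every export, a revocation automatically excludes the record from all future releases, which is the "progressive regulation" property the discussion points to.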
Argument 7
Open‑data architecture should be federated rather than centralized, allowing sector‑specific repositories coordinated by relevant regulators (Rama Vedashree)
EXPLANATION
Vedashree argues that a single monolithic data portal limits flexibility and that a federated model better serves diverse sectoral needs.
EVIDENCE
She states “I also believe that we cannot have one centralized open data repository. Data needs to be federated” [312-314].
MAJOR DISCUSSION POINT
Designing a federated open‑data ecosystem
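A federated architecture of the kind Vedashree describes can be sketched as a thin national catalogue that routes queries to sector-run repositories instead of copying data into one central store. The `SectorRepository` and `FederatedCatalogue` classes below are hypothetical, minimal stand-ins for what would in practice be independent services run by sectoral regulators.

```python
class SectorRepository:
    """A sector-run repository; each regulator keeps its own data store."""
    def __init__(self, sector: str):
        self.sector = sector
        self._datasets: dict[str, dict] = {}

    def publish(self, name: str, data: dict) -> None:
        self._datasets[name] = data

    def get(self, name: str) -> dict:
        return self._datasets[name]

class FederatedCatalogue:
    """A thin national index that routes queries to sector repositories
    rather than holding any data itself."""
    def __init__(self):
        self._repos: dict[str, SectorRepository] = {}

    def register(self, repo: SectorRepository) -> None:
        self._repos[repo.sector] = repo

    def fetch(self, sector: str, name: str) -> dict:
        return self._repos[sector].get(name)

catalogue = FederatedCatalogue()
health = SectorRepository("health")
health.publish("bed-occupancy", {"state": "Kerala", "pct": 71})
catalogue.register(health)
result = catalogue.fetch("health", "bed-occupancy")
```

The design choice is that the catalogue only knows where data lives, so each sector retains autonomy over formats, access rules and safeguards.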
Argument 8
A large portion of institutional data remains hidden (‘dark data’), requiring proactive policies to uncover and open it for innovation (Rama Vedashree)
EXPLANATION
Vedashree points out that many datasets are locked within agencies and not publicly accessible, limiting their utility unless deliberate efforts are made to expose them.
EVIDENCE
She describes “a lot of institutional data which is getting locked and siloed… I would like to call it dark data because nobody is using them,” and notes that sectors such as cybersecurity and fintech need this data for innovation [164-166].
MAJOR DISCUSSION POINT
Hidden institutional data (‘dark data’) limits innovation
Asha Jadeja Motwani
2 arguments · 176 words per minute · 430 words · 146 seconds
Argument 1
Heavy reliance on the American tech stack creates strategic risk; a joint Indo‑US regulatory framework is needed (Asha Jadeja Motwani)
EXPLANATION
Motwani warns that India’s reliance on the American tech stack is a strategic vulnerability, raising the hypothetical of the US turning hostile and pulling API access. If India consciously chooses that stack, she argues, it must secure coordinated Indo‑US regulations that deliver reciprocal benefits and protect national interests.
EVIDENCE
She says “we need a joint regulatory framework so that we are never conflicting with them… we have consciously chosen to be on the American stack” [327-339].
MAJOR DISCUSSION POINT
Geopolitical and strategic considerations
Argument 2
Open health data is essential for global research collaborations and must be secured through reciprocal Indo‑US regulatory frameworks
EXPLANATION
Motwani stresses that making Indian health data openly available enables Western researchers to develop cures that benefit India, but this openness must be balanced with joint regulatory safeguards to protect national interests and ensure mutual benefit.
EVIDENCE
She points out that “we need to make sure that our health data is open and accessible to those in the West who are developing these programs… it is critical to know that it’s a fine balance” and calls for a joint regulatory framework to avoid conflicts when using the American technology stack [329-334].
MAJOR DISCUSSION POINT
Strategic importance of health data openness and Indo‑US regulatory alignment
Irina Ghose
6 arguments · 171 words per minute · 680 words · 237 seconds
Argument 1
Anthropic’s Model‑Context Protocol (MCP) provides contextual Indian language data to build trust (Irina Ghose)
EXPLANATION
Ghose describes Anthropic’s 2024 Model‑Context Protocol, which supplies Indian‑language, domain‑specific data to AI models, fostering trust and relevance for Indian users.
EVIDENCE
She details that MCP was created in 2024, released to the Linux community, and supplies contextual data for Indian languages and sectors such as agriculture and health [177-190].
MAJOR DISCUSSION POINT
Capacity building, standards and AI‑ready data
Argument 2
Anthropic’s Model‑Context Protocol creates a feedback loop where usage metrics inform continuous data provision, ensuring models stay relevant to Indian contexts
EXPLANATION
Ghose describes how Anthropic collects an economic impact survey to understand how Indian users employ its tools and then tailors data releases accordingly.
EVIDENCE
She notes “we are doing an economic impact survey index… we share it completely contextually as to what people are using it for” [180-182].
MAJOR DISCUSSION POINT
Data‑driven model adaptation for local relevance
Argument 3
Trust in AI systems is built through contextual, open data and inclusive contribution from all stakeholders
EXPLANATION
Ghose argues that for AI to be trusted in India, data must be contextual to local languages and domains, openly shared, and the ecosystem must involve contributions from government, industry and civil society, creating a transparent trust‑first environment.
EVIDENCE
She says “the thread of trust needs to be woven by the contextual data in the context of India and ensuring that we are making it both open, accessible and ensuring that everybody is contributing to that grid” and likens the need for a universal connector to ensure seamless data flow across sectors [321-324].
MAJOR DISCUSSION POINT
Building trust through contextual open data
Argument 4
Adopting standardized data connectors (akin to a universal charger) is crucial for seamless integration of diverse data sources across sectors.
EXPLANATION
Ghose argues that without common interfaces, integrating varied datasets becomes cumbersome, and a universal protocol would simplify data consumption for AI applications.
EVIDENCE
She uses the analogy “when you had a mobile phone world you did not want to have a charger for different mobiles… the universal connector came across that solved all the problems,” illustrating the need for a model-context protocol as a standard connector for data [186-190].
MAJOR DISCUSSION POINT
Standardized data connectors facilitate AI integration
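Ghose’s universal-charger analogy maps naturally onto the adapter pattern: heterogeneous sources each implement one common interface, so an AI application need not care whether data arrives as legacy CSV rows or structured API records. The connector classes below are an illustrative sketch, not MCP itself, which defines a richer client-server protocol.

```python
from abc import ABC, abstractmethod

class DataConnector(ABC):
    """The 'universal charger': every source exposes the same interface."""
    @abstractmethod
    def fetch(self, query: str) -> list[dict]: ...

class CsvPortalConnector(DataConnector):
    """Adapter for a legacy portal that serves rows as CSV strings."""
    def __init__(self, rows: list[str]):
        self.rows = rows
    def fetch(self, query: str) -> list[dict]:
        out = []
        for row in self.rows:
            crop, yield_kg = row.split(",")
            if query in crop:
                out.append({"crop": crop, "yield_kg": int(yield_kg)})
        return out

class ApiConnector(DataConnector):
    """Adapter for a modern API that already returns structured records."""
    def __init__(self, records: list[dict]):
        self.records = records
    def fetch(self, query: str) -> list[dict]:
        return [r for r in self.records if query in r["crop"]]

# An AI application consumes both sources through one interface.
sources: list[DataConnector] = [
    CsvPortalConnector(["wheat,3200", "rice,2700"]),
    ApiConnector([{"crop": "wheat", "yield_kg": 3100}]),
]
wheat = [rec for src in sources for rec in src.fetch("wheat")]
```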
Argument 5
Anthropic conducts an economic impact survey to tailor data provision to Indian user needs, building trust and relevance.
EXPLANATION
Ghose explains that Anthropic runs an economic impact survey index to understand how Indian users employ its tools, and then shares data and insights contextualised to those use cases, thereby fostering trust in AI systems.
EVIDENCE
She states that Anthropic is carrying out an economic impact survey index to ensure data is made available for the way people are using it in India, and that they share this information completely contextually about user behavior [180-182].
MAJOR DISCUSSION POINT
Capacity development and trust‑building in AI deployment
Argument 6
Anthropic commits to building AI solutions across key Indian sectors such as agriculture, health and education to ensure the AI moment for India.
EXPLANATION
Ghose says the company is working with partners to make data transparently available for sectors that matter most to India, aiming to create sector‑specific AI applications that address local challenges.
EVIDENCE
She mentions collaborations with Google, Anthropic, Microsoft and others to provide data for agriculture, health and education, and that they are ensuring data is made available for those sectors to make India’s AI moment a reality [190-191].
MAJOR DISCUSSION POINT
Social and economic development through sectoral AI
Dr. Sasmit Patra
3 arguments · 176 words per minute · 830 words · 281 seconds
Argument 1
Soft‑touch regulation balances national security, public good and commercial use (Dr. Sasmit Patra)
EXPLANATION
Patra proposes a nuanced regulatory approach that is not overly restrictive but still safeguards security, public interest, and commercial innovation.
EVIDENCE
He references ongoing discussions about a “soft-touch regulation” framework that would manage data sharing without imposing a hard EU-style AI Act [288-298].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
A nuanced, soft-touch regulatory approach that avoids stifling innovation while protecting security and public interest is advocated in policy discussions on over-regulation [S32] [S33].
MAJOR DISCUSSION POINT
Geopolitical and strategic considerations
Argument 2
Data should be classified into three tiers—public‑good, national‑security‑sensitive, and commercially monetizable—to enable differentiated regulatory treatment
EXPLANATION
Patra proposes a tiered approach that treats data differently based on its societal, security, and commercial value, allowing nuanced policy responses.
EVIDENCE
He outlines “the second data is restricted and probably national security. And the third data is something that can be monetized and commercially useful” [291-293].
MAJOR DISCUSSION POINT
Tiered data categorisation for policy design
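Patra’s three tiers can be sketched as a simple classification step that downstream access-control logic could branch on. The flags and tier labels below are assumptions made for illustration; an actual policy would need statutory definitions rather than boolean fields.

```python
from enum import Enum

class Tier(Enum):
    PUBLIC_GOOD = "open by default"
    RESTRICTED = "national-security review required"
    COMMERCIAL = "licensed, monetisable access"

def classify(dataset: dict) -> Tier:
    """Assign a tier from simple flags; security concerns take precedence
    over commercial value, and everything else defaults to public good."""
    if dataset.get("security_sensitive"):
        return Tier.RESTRICTED
    if dataset.get("monetisable"):
        return Tier.COMMERCIAL
    return Tier.PUBLIC_GOOD

tiers = {
    "census-aggregates": classify({"security_sensitive": False}),
    "border-logistics": classify({"security_sensitive": True}),
    "transaction-trends": classify({"monetisable": True}),
}
```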
Argument 3
Parliamentary oversight committees can guide the development of soft‑touch data regulation, ensuring alignment with national priorities.
EXPLANATION
Patra highlights the role of legislative bodies, such as the Parliamentary Oversight Committee on Communications and IT, in shaping balanced, flexible regulatory frameworks rather than adopting a rigid EU‑style AI Act.
EVIDENCE
He notes his membership on the Parliamentary Oversight Committee and references ongoing discussions about a “soft-touch regulation” framework as an alternative to a hard EU AI Act [294-298].
MAJOR DISCUSSION POINT
Parliamentary oversight in shaping soft‑touch regulation
Audience Member 1
1 argument · 160 words per minute · 60 words · 22 seconds
Argument 1
Auditors and regulators themselves depend on AI; independent assurance mechanisms are required (Audience Member 1)
EXPLANATION
The audience member questions who will oversee AI‑enabled auditors and regulators, highlighting the need for independent assurance structures.
EVIDENCE
He asks directly, “who would be watching the watchers?” indicating concern over oversight of AI-dependent oversight bodies [344].
MAJOR DISCUSSION POINT
Governance, oversight and accountability of AI systems
BK Patnaik
2 arguments · 0 words per minute · 0 words · 1 second
Argument 1
AI benefits for farmers are limited by on‑ground infrastructure constraints (BK Patnaik – audience question, reflected by Shashi Tharoor)
EXPLANATION
Patnaik points out that without basic infrastructure (tractors, reliable electricity, water, and internet connectivity) AI solutions cannot reach or benefit Indian farmers effectively.
EVIDENCE
He asks whether AI will be successful for farmers given the absence of tractors, electricity and water [346]; Tharoor later acknowledges that many farmers lack these essentials, limiting AI’s reach [389-390].
MAJOR DISCUSSION POINT
Practical challenges for end‑users and sectoral applications
Argument 2
Effective AI adoption in agriculture requires simultaneous investment in basic rural infrastructure
EXPLANATION
Patnaik implies that policy must address foundational needs (tractors, power, water, and connectivity) alongside AI deployment, emphasizing that technology alone cannot bridge the gap without supporting physical resources.
EVIDENCE
His question points to the need for tractors, electricity, water, and 24-hour power for farmers, indicating that AI’s potential is limited without such infrastructure [346]; Tharoor’s later comment reinforces this point [389-390].
MAJOR DISCUSSION POINT
Infrastructure prerequisites for AI‑driven agricultural transformation
Audience Member 3
2 arguments · 232 words per minute · 258 words · 66 seconds
Argument 1
Sensitive health data (e.g., men’s mental health) cannot be fully open; progressive regulation and patient‑controlled sharing are required (Audience Member 3, answered by Rama Vedashree)
EXPLANATION
The audience member raises concerns about the difficulty of accessing precise health data for men, prompting a response that such personally identifiable data should remain protected, with optional patient consent for research.
EVIDENCE
The question highlights the challenge, and Vedashree replies that “personally identifiable data will never be opened up” and cites the German healthcare act allowing patients to voluntarily share anonymised data for research [355-361].
MAJOR DISCUSSION POINT
Practical challenges for end‑users and sectoral applications
Argument 2
Regulatory frameworks should permit sharing of aggregated, non‑sensitive datasets for research while maintaining strict privacy safeguards
EXPLANATION
The audience member asks how the government can enable the release of non‑sensitive data for research without compromising privacy.
EVIDENCE
He asks “what is the regulatory framework that can be put into place so that this non-sensitive data can be shared for research?” [363-366].
MAJOR DISCUSSION POINT
Balancing data openness with privacy protection for research use
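One common mechanism for the kind of framework the audience member asks about is small-cell suppression: publish only aggregates, and suppress any group too small to hide an individual. The threshold of five and the field names below are illustrative assumptions, not a statutory rule.

```python
from collections import Counter

def safe_aggregate(records: list[dict], key: str, min_count: int = 5) -> dict:
    """Aggregate counts by `key`, suppressing groups smaller than `min_count`
    so that rare combinations cannot single out individuals."""
    counts = Counter(r[key] for r in records)
    return {k: v for k, v in counts.items() if v >= min_count}

records = (
    [{"district": "Pune", "condition": "flu"}] * 7
    + [{"district": "Nashik", "condition": "flu"}] * 2  # too small: suppressed
)
published = safe_aggregate(records, "district")  # Nashik is withheld
```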
Takeaways
Key takeaways
A statutory, legally binding framework for open data is essential; voluntary approaches lead to uneven participation and investor uncertainty.
Open data should be treated as public infrastructure that enhances transparency and democratic accountability and fuels private-sector ecosystems.
For the AI era, data must be AI‑ready: interoperable standards, rich metadata, API‑based access, and continuous availability are required.
Robust safeguards (strong anonymisation, informed consent, grievance mechanisms, and privacy protections) are mandatory to prevent tokenism and exploitation.
Domestic capacity building (digital infrastructure, skills, and regulatory expertise) is critical to avoid data capitulation to foreign platforms.
Reliable public data boosts investor confidence and market efficiency and can help shift India from a services‑based to a product‑based tech economy.
Geopolitical reliance on foreign technology stacks (especially the US) poses strategic risks; joint regulatory approaches and soft‑touch regulation are needed.
Governance and oversight must combine legal mechanisms (courts, rule of law) with ethical frameworks and independent assurance to “watch the watchers.”
Sector‑specific data (health, agriculture, finance) requires tailored opening strategies, balancing accessibility with security and consent.
Resolutions and action items
Develop a comprehensive national data strategy that is federated rather than a single central repository (Rama Vedashree).
Introduce clear statutory mandates for government bodies to share standardized, aggregated datasets, with tiered access models (free, paid, restricted).
Adopt interoperable standards and API‑first delivery for AI‑ready data, including metadata requirements (Rama Vedashree).
Implement consent‑based, revocable data-sharing mechanisms and establish grievance redressal processes (Shashi Tharoor).
Leverage the Model‑Context Protocol (MCP) developed by Anthropic to provide contextual Indian‑language data for AI models (Irina Ghose).
Align India’s open‑data policies with international commitments such as the G20 New Delhi Leaders Declaration and the UN Global Digital Compact.
Create sector‑specific data‑opening policies (e.g., health, agriculture, fintech) in coordination with relevant regulators (Rama Vedashree, Dr. Sasmit Patra).
Explore a joint Indo‑US regulatory framework to mitigate strategic dependence on the US tech stack (Asha Jadeja Motwani).
Strengthen domestic digital infrastructure and skill-development programs to support data processing and AI capabilities.
Unresolved issues
Exact legislative design, timeline and enforcement mechanisms for the proposed statutory open‑data framework remain undefined.
How to achieve large‑scale citizen trust and consent for sharing personal and sensitive data, especially in the health and finance sectors.
Concrete mechanisms for independent oversight of AI systems and “watching the watchers” beyond the general reference to courts and ethics.
Funding models and institutional responsibilities for building and maintaining AI‑ready data platforms and capacity‑building initiatives.
Strategies to ensure AI benefits reach smallholder farmers and underserved populations given infrastructure constraints (electricity, equipment).
Resolution of geopolitical tensions related to dependence on US APIs and hardware, and the specifics of a joint regulatory approach.
Details of how monetisation of open data will be regulated to ensure an equitable return to India and its citizens.
Suggested compromises
Introduce “structured openness” with statutory teeth while maintaining tiered access (free, paid, restricted) to balance openness and control.
Adopt a soft‑touch regulatory model that protects national security and the public interest while allowing commercial use (Dr. Sasmit Patra).
Implement a federated open‑data architecture rather than a single centralized repository, allowing sectoral autonomy (Rama Vedashree).
Combine voluntary incentives for data sharing with legal obligations to encourage participation without stifling innovation (dialogue between Prime Minister and Sir Humphrey).
Pursue a joint Indo‑US regulatory framework that aligns standards while preserving India’s strategic autonomy (Asha Jadeja Motwani).
Thought Provoking Comments
The real constraint of the AI age is not the volume of data, but the power to process it. Abundance alone does not confer agency, and openness without capacity can entrench inequality as easily as it can enable progress.
Challenges the common mantra that “data is the new oil” and reframes the debate around compute power and capacity, shifting focus from sheer data quantity to who can actually use it.
Redirected the conversation from a purely supply‑side view of open data to a demand‑side perspective, prompting later speakers (e.g., Rama Vedashree, Irina Ghose) to stress AI‑ready formats, standards, and capacity‑building.
Speaker: Shashi Tharoor
Open data is not just a technical tool; it is a statement of intent about how knowledge is shared, how power is distributed and how societies choose to govern the informational foundations of innovation.
Elevates open data from a bureaucratic exercise to a political and ethical issue, framing it as a matter of sovereignty and fairness.
Set the tone for the panel’s deeper exploration of data sovereignty, leading to Arun Prabhu’s call for a durable legal architecture and Asha Jadeja Motwani’s concerns about geopolitical dependencies.
Speaker: Shashi Tharoor
Open data must be tied to domestic capacity building. Data sovereignty has little meaning without adequate capacity; public data should strengthen local research institutions, startups, and digital infrastructure.
Links openness to tangible national capability, warning against a one‑way flow of raw data to foreign AI firms.
Prompted discussion on the need for AI‑ready data, APIs, and sector‑specific standards, which Rama Vedashree and Irina Ghosh later elaborated.
Speaker: Shashi Tharoor
We need open data that is AI‑ready: always available, with rich metadata, interoperable standards, and consumable via APIs—not just static CSVs or PDFs.
Identifies a concrete technical gap in India’s current open‑data ecosystem and connects it to the practical needs of modern AI development.
Shifted the dialogue from policy rhetoric to actionable technical requirements, influencing Irina Ghose’s discussion of the Model Context Protocol (MCP) and Arun Prabhu’s call for standards.
Speaker: Rama Vedashree
Trust must be a verifiable outcome. We need contextual Indian data, a Model Context Protocol, and transparent sharing mechanisms so that AI models are built on data that reflects local languages, domains, and realities.
Introduces a concrete governance tool (MCP) and emphasizes the necessity of contextualization for trustworthy AI, moving beyond generic openness.
Provided a practical example of how private sector can contribute to the regulatory framework, reinforcing the earlier points about standards and capacity.
Speaker: Irina Ghose
We lack four essential elements: a clear anonymisation standard, public data interchange standards, a recognised purpose for processing public data, and a legal architecture that aligns with constitutional principles. Without these, any open‑data initiative is legally vulnerable and unsustainable.
Synthesises the legal gaps into a clear checklist, highlighting why voluntary or fragmented policies have failed.
Served as a turning point that moved the conversation toward concrete legislative reforms, prompting Cyril Shroff’s analogy with capital‑market regulation and reinforcing the need for enforceable standards.
Speaker: Arun Prabhu
If you substitute the word ‘capital market’ with ‘data’, you see the same answer: trust comes from regulatory clarity, enforcement, and uniform standards. Without an enforceable legal framework, open data cannot generate investor confidence or economic growth.
Uses a familiar analogy to illustrate how data regulation mirrors successful financial market regulation, making the abstract concept of “trust” concrete.
Bridged the gap between legal theory and economic outcomes, steering the discussion toward the macro‑economic implications of open data and reinforcing the urgency for enforceable rules.
Speaker: Cyril Shroff
The political question is whether citizens are willing to share their data, even anonymised. Trust in government is the linchpin; without citizen buy‑in, any statutory mandate will flounder.
Highlights the often‑overlooked social dimension—public consent and trust—of data policy, reminding the panel that technical or legal solutions must be socially grounded.
Prompted audience questions about privacy and led to a broader debate on ethics, the role of courts, and the need for public education, influencing later remarks by Cyril Shroff and Shashi Tharoor.
Speaker: Sasmit Patra
We are built on an American stack; if we consciously choose that, we need a joint regulatory framework with the US to ensure we are not hostage to foreign APIs and that our data returns benefits to India.
Raises a geopolitical risk that ties technology dependence to sovereignty, expanding the discussion beyond domestic policy to international strategic considerations.
Shifted the conversation toward geopolitical strategy, prompting further remarks on soft‑touch regulation (Patra) and the need for diversified partnerships.
Speaker: Asha Jadeja Motwani
Who watches the watchers? In India, the answer lies in our constitution, the courts, and an emerging ethics code. The judiciary, despite its backlog, remains the ultimate guarantor of rule of law.
Directly addresses the audience’s concern about oversight of AI regulators, grounding the answer in institutional checks rather than abstract promises.
Closed the loop on governance concerns, reinforcing the earlier emphasis on legal enforceability and ethical standards, and setting the stage for Shashi Tharoor’s concluding remarks on practical implementation.
Speaker: Cyril Shroff
Even if we open 1,000 Indian health cases to the West, we must ensure that the resulting AI models are not proprietary and that the benefits flow back to Indian patients; otherwise open data becomes exploitation.
Connects the abstract debate on data sharing to a concrete equity issue, warning against a new form of extractive digital colonialism.
Re‑focused the panel on the need for benefit‑sharing clauses in any open‑data framework, influencing the final calls for “fairer digital order” and reinforcing Arun Prabhu’s legal‑architecture checklist.
Speaker: Shashi Tharoor
Overall Assessment

The discussion was driven forward by a series of pivotal insights that moved the panel from high‑level rhetoric to concrete, actionable concerns. Shashi Tharoor’s framing of data as power, constrained by compute, set the intellectual agenda, while Rama Vedashree and Irina Ghose translated that into technical standards and trust mechanisms. Arun Prabhu’s legal checklist and Cyril Shroff’s capital‑market analogy provided a clear roadmap for enforceable regulation, and the geopolitical caution from Asha Jadeja Motwani broadened the scope to international dependencies. Throughout, the recurring theme of citizen trust, highlighted by Sasmit Patra, kept the conversation grounded in democratic legitimacy. Collectively, these comments reshaped the dialogue from abstract policy debate into a multidimensional roadmap encompassing technical standards, legal architecture, economic incentives, societal consent, and geopolitical strategy, ultimately steering the panel toward a consensus that meaningful open‑data regulation must be capacity‑building, enforceable, and equitable.

Follow-up Questions
How might we craft a regulatory framework for open data that matches our ambitions and addresses anxieties about AI?
Seeks a balanced regulation that enables innovation while protecting privacy and fairness.
Speaker: Shashi Tharoor
What were the origins, evolution, and challenges of India’s national data sharing and accessibility policy and the open government data platform?
Understanding the policy’s history is essential to identify gaps and improve future implementation.
Speaker: C. Raj Kumar (to Rama Vedashree)
Can open data sharing frameworks drive trust‑first innovation for AI developers like Anthropic, making models more secure and trustworthy in India?
Explores whether open data can enhance AI reliability and market confidence.
Speaker: C. Raj Kumar (to Irina Ghose)
Is a clearer regulatory framework necessary to ensure consistent, effective, systemic data sharing by government bodies, and what role should it play in incentives, accountability, and initiatives?
Calls for institutional mechanisms to make government data sharing mandatory and reliable.
Speaker: C. Raj Kumar (to Cyril Shroff)
How can greater availability of reliable public data lead to stronger evidence‑based policymaking and more efficient delivery of public goods?
Links open data to improved policy design and targeted welfare outcomes.
Speaker: C. Raj Kumar (to Dr. Sasmit Patra)
Why have well‑intended policies like the India Data Accessibility and Use Policy and the National Data Governance Framework not progressed, lacking regulatory enforcement?
Seeks reasons behind policy‑implementation gaps to inform corrective action.
Speaker: C. Raj Kumar (to Rama Vedashree)
Which core principles and safeguards should shape a structured legal framework for open data sharing, including tiers of access and protection against misuse?
Aims to define the legal architecture needed for a sustainable open data ecosystem.
Speaker: C. Raj Kumar (to Arun Prabhu)
How can greater availability of reliable public data influence investor confidence, market efficiency, and long‑term economic growth?
Investigates the economic benefits of data transparency for attracting investment.
Speaker: C. Raj Kumar (to Cyril Shroff)
What geopolitical concerns arise when scaling up open data sharing in India, especially regarding opposition parties and international relations?
Highlights potential security and diplomatic risks of extensive data openness.
Speaker: C. Raj Kumar (to Dr. Sasmit Patra)
Who will monitor the regulators and auditors who themselves rely on AI (“watch the watchers”)?
Raises the need for oversight mechanisms for AI‑dependent oversight bodies.
Speaker: Audience Member 1
Will AI‑driven data interventions improve outcomes for Indian farmers, and how can success be measured?
Seeks evidence of AI impact on agriculture and metrics for evaluation.
Speaker: Audience Member 2 (to Dr. Sasmit Patra)
How can regulatory frameworks ensure precise, sensitive data (e.g., men’s mental health) is provided to appropriate parties while protecting privacy?
Calls for safeguards that allow targeted data use without compromising confidentiality.
Speaker: Audience Member 3
What regulatory mechanisms can enable sharing of non‑sensitive health data for research while safeguarding confidentiality?
Looks for practical rules to facilitate research use of health data without privacy breaches.
Speaker: Audience Member 3 (follow‑up)
What AI‑ready open data standards (metadata, APIs, interoperable protocols such as MCP) are needed to support AI systems?
Identifies technical standards required for data to be consumable by AI applications.
Speaker: Rama Vedashree
What clear anonymisation standards, public data interchange standards, and purpose definitions are required for open public data processing?
Points to legal and technical specifications essential for a sustainable data economy.
Speaker: Arun Prabhu
How can we assess the supply‑demand gap for open data and map user needs across research, startups, and sectoral regulators?
Suggests a data‑centric approach to align releases with actual stakeholder requirements.
Speaker: Rama Vedashree
How should sector‑specific data opening policies (e.g., payment systems, health) be designed, and can a federated open data strategy be implemented?
Advocates tailored, federated approaches rather than a single centralized repository.
Speaker: Rama Vedashree
What joint regulatory framework is needed with the United States (or other democracies) to manage reliance on the American tech stack and prevent risks of API pull‑outs?
Addresses geopolitical risk of dependence on foreign technology infrastructure.
Speaker: Asha Jadeja Motwani
How can we ensure that health data aggregation benefits India and does not result in proprietary AI that excludes Indian users; what models of benefit‑sharing are needed?
Seeks fair‑share mechanisms for data‑driven innovations that use Indian health data.
Speaker: Shashi Tharoor
How can the Indian judiciary be equipped with AI tools to address massive case backlogs and support the rule of law?
Highlights the need for AI integration in courts to improve legal system efficiency.
Speaker: Shashi Tharoor
What ethics code or framework is required for AI governance to address bias and other non‑legal concerns?
Calls for complementary ethical standards alongside legal regulation.
Speaker: Cyril Shroff

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

Scaling Trusted AI: How France and India Are Building Industrial & Innovation Bridges

Session at a glance: Summary, keypoints, and speakers overview

Summary

The AI Impact Summit highlighted deepening Franco-Indian collaboration on artificial intelligence, with leaders from both countries convening to showcase joint initiatives. Estelle David noted that about one hundred French firms across quantum-ready photonics, secure edge AI, mobility, cybersecurity, digital twins and green tech participated, and she cited several concrete agreements that illustrate growing bilateral trust and investment: a strategic partnership between Dacia Technology and GT Solved, a satellite propulsion contract between ExoTrail and Druva Space, and a healthcare collaboration between H-Company and St John’s Hospital [3-4][8-11][12-14].


Julie Huguet, director of LaFrenchTech, emphasized that France now ranks among the world’s top three AI ecosystems and that the summit serves to build bridges, share common values such as low environmental impact, and accelerate French startup growth, citing the recent Macron-announced partnership between H-Company and St John’s Hospital to improve hospital efficiency [39-44][50-51]. She also presented four French startups (Agri-Co, White Lab Genomics, Candela and Edge Company) as exemplars of technologies ready to benefit from India’s scale [54-58].


In the high-level panel, moderator Arun Sasheesh framed trust as the prerequisite for AI scaling, arguing that large organisations will adopt AI only when they trust it [84-94]. Neelakantan Venkataraman defined trust as “having your back” and stressed that it must be embedded at every layer of the AI stack, from data lineage to compliance with regulations such as India’s DPDP and the EU AI Act [130-141]. Valerian Giesz (Candela) added that trust requires traceability, predictability, verifiability, security and accountability, and announced the MERLIN benchmarking framework to create a shared baseline between the quantum and AI communities [160-168][172-176][259-267]. David Sadek of Thales outlined four pillars (security through “friendly hacking”, explainability, regulatory responsibility, and frugal AI for a reduced carbon footprint), insisting that trust must be demonstrated, not merely promised [188-197]. Tanuj Mittal linked trust to scale by referencing India’s UPI system, noting that once users trust a platform, massive transaction volumes naturally follow [281-283].


The subsequent “AI for Science” session, chaired by Prof. Karandikar, stressed that AI can compress years of research into months but warned that equitable access and reproducibility remain major challenges [369-372][380-384]. Antoine Petit described CNRS’s virtual “AI for Science, Science for AI” centre to foster interdisciplinary collaboration, while cautioning about the risk of AI-generated false papers [462-470][479-482]. Joelle Pineau argued that transparency and standardized evaluation are essential to address the reproducibility crisis, and that AI itself can accelerate reproducibility through open challenges [550-558].


Overall, participants agreed that sustained Franco-Indian cooperation, robust trust frameworks embedded across technology, regulation and governance, and open scientific practices are essential to scale AI responsibly and deliver broad societal benefits [8-11][126-129][272-275][592-603].


Keypoints


Major discussion points


Franco-Indian AI partnership and concrete outcomes – The opening remarks highlighted a series of signed agreements (e.g., Dacia-GT Solved, ExoTrail-Druva Space, H-Company-St James Hospital) that illustrate “real partnerships, real signatures and real commitments between our two countries” [8-12]. Julie later reinforced the strategic value of the summit, noting that the French President announced a new collaboration between H-Company and St John’s Hospital to improve hospital efficiency [50-52].


Trust as the cornerstone for scaling AI – Multiple speakers argued that trust must be built into every layer of AI systems to achieve scale. Arun emphasized that “trust is the only way to scale” and that large organisations will adopt AI only when they trust it [84-92]. Neelakantan defined trust as “I have your back and I will not fail you” and described its evolution from pilot to production, stressing architectural embedding and regulatory codification [130-142]. Valerian listed pillars such as traceability, predictability, verifiability, security and accountability [159-167]. David added “trust is not a label … it’s a proof” and outlined technical, explainability and responsibility dimensions [188-196]. Tanuj illustrated the link between trust and scale with the UPI example [281-283].


Ecosystem-driven innovation and open collaboration – The panel repeatedly called for an ecosystem mindset rather than isolated effort. Neelakantan said “the mindset of an ecosystem… we can’t do it all” [253-256]. Valerian advocated “breaking the walls between quantum and AI” and building a community through shared benchmarks like the MERLIN framework [259-267]. Julie highlighted complementary strengths: India’s “scale, speed” and France’s “deep-tech excellence, scientific force, industrial capability” [62-65].


AI for scientific discovery, reproducibility and global cooperation – The second panel focused on using AI to accelerate research while addressing reproducibility and equity. Karandikar framed AI for science as a “core pillar” to compress decades of research into months and stressed the need to bridge the digital divide [368-374]. Amit described the IRO initiative to create high-end talent, IP pipelines and industry-academic collaborations [386-430]. Antoine explained CNRS’s virtual “AI for Science, Science for AI” centre and warned about the risk of AI-generated false papers [444-482]. Joelle emphasized transparency and evaluation as keys to reproducible AI-driven science [548-558].


Inclusive, people-centric vision for AI’s societal impact – Throughout the summit speakers invoked shared values and the need to reach the “bottom of the pyramid.” Julie spoke of “trustworthy, low environmental footprint, positive impact for humanity” [46-49]. Raj Reddy called for measurable multilingual AI that serves villagers and stressed personal, sovereign edge models for privacy [294-324]. Karandikar and Irakli highlighted the digital-divide challenge and the importance of AI benefiting “all, not a selected few” [368-371][595-599].


Overall purpose / goal of the discussion


The AI Impact Summit was convened to deepen Franco-Indian collaboration, showcase French AI startups, and create concrete partnership opportunities while jointly addressing how to build trusted, scalable AI across sectors. A secondary aim was to explore AI for scientific research, promote reproducibility, and discuss policies that ensure AI’s benefits are inclusive, ethical, and globally distributed.


Overall tone and its evolution


– The session opened with a celebratory and diplomatic tone, praising high-level visits and announcing partnership signings.


– It then shifted to a technical-analytical tone, as panelists dissected the concept of trust, its architectural, regulatory and operational dimensions.


– Mid-discussion the tone became collaborative and ecosystem-focused, emphasizing community building, open benchmarking, and complementary strengths.


– The later AI-for-science segment adopted a forward-looking, visionary tone, balancing excitement about accelerated discovery with caution about reproducibility and equity.


– Throughout, the tone remained optimistic and solution-oriented, concluding with a reaffirmation of shared values and a call for inclusive, people-centric AI deployment.


Speakers

Speakers (from the provided list)


Estelle David – Representative of Business France; opened the summit and highlighted French-India AI collaborations. Area: International trade & AI partnership. [S1][S2]


Joelle Pineau – Chief AI Officer (as mentioned in the panel) and Vice President of AI Research at Meta (external source). Area: AI research, AI governance. [S4][S3]


Sandeep Kumar Saxena – Chief Growth Officer, HCL Technologies. Area: AI-driven services and growth markets.


Tanuj Mittal – Senior Director, Customer Solution Experience, Dassault Systèmes. Area: Industrial AI platforms and digital twins.


Valerian Giesz – Co-Founder and CEO of Candela (quantum-computing startup). Area: Photonic quantum computers, quantum AI. [S9]


Antoine Petit – CEO and Chairman, CNRS France (Centre National de la Recherche Scientifique). Area: Scientific research, AI for science. [S10]


Raj Reddy – Professor, founding director of the Robotics Institute, Carnegie Mellon University; 1994 Turing Award winner. Area: AI, robotics, multilingual AI. [S11]


Julie Huguet – Director of the French Tech Mission (LaFrenchTech). Area: French startup ecosystem, AI impact summit. [S12]


Amit Sheth – Founder, Indian AI Research Organization (IRO). Area: AI research, neurosymbolic models for health, sustainability, pharma. [S13][S14]


David Sadek – VP Research Technology & Innovation, Global CTUI and Quantum Computing, Thales. Area: AI security, “friendly hacking”, AI ethics. [S15]


Irakli Beridze – Head of Center of AI and Robotics, UNICRI (UN Interregional Crime and Justice Research Institute). Area: AI for law-enforcement, responsible AI frameworks. [S18][S17]


Audience – Members of the audience who asked questions; no specific titles provided.


Arun Sasheesh – Associate Partner & Country Director, TNP Consultants; moderator of the high-level panel. [S23]


Abhay Karandikar – Secretary, Department of Science and Technology, India; moderator of the “AI for Science” session. [S25]


Moderator – Unnamed conference moderator who introduced speakers and managed transitions.


Neelakantan Venkataraman – Vice President & Global Business Head, Cloud AI & Edge Data Communications, Tata Communications. Area: Cloud AI, edge computing, AI-center of excellence. [S30]


Additional speakers (not in the provided list)


Saloni – Session coordinator/moderator (addressed by Arun Sasheesh).


Mark Vialmopillier – Offered a tribute to Professor Raj Reddy, founding director of the Robotics Institute at Carnegie Mellon University (historical reference).


Julie Rouget – Introduced herself as “Julie Rouget, director of the French Tech mission”; appears to be the same person as Julie Huguet but named differently in the transcript.


Professor Zuel Pino – Referred to as “Ms. Joelle Pino, Chief AI Officer” (different spelling of Pineau’s name).


Professor Antonin Petit – Alternate spelling of Antoine Petit (already listed).


(Note: Some names appear multiple times with slight spelling variations; they are consolidated above.)


Full session report: Comprehensive analysis and detailed insights

Opening remarks (Estelle David) – Estelle David of Business France opened the AI Impact Summit, welcoming Prime Minister Modi and President Macron at the French pavilion and noting that the week was a great opportunity to showcase French innovation. She highlighted that roughly one hundred French companies were present, spanning quantum-ready photonics, secure edge AI, mobility systems, cybersecurity, digital twins and green tech, and that all participants share the conviction that AI is “the next frontier” [1-5]. She also thanked the Platinum, Gold and Silver sponsors (CMA CGM, Total, BNP Paribas, Capgemini, Schneider Electric and MBDA) who supported the event [70-73]. David then outlined a series of concrete Franco-Indian agreements signed during the week, illustrating the summit’s focus on “real partnerships, real signatures and real commitments”. The first was a strategic partnership between Dacia Technology and GT Solved, signed in Bangalore at the French consulate [8]. A second deal saw ExoTrail and Druva Space contract for the delivery of fourteen satellite-propulsion systems, symbolising cooperation in the space sector [9]. Additional signatures included a collaboration between H-Company and St James Hospital in Bangalore, a partnership linking North France Invest with the TIAB, an alliance between T-U-B and a leading Indian innovation ecosystem, and a later H-Company-St John’s Hospital initiative announced by President Macron [10-13][46-51]. David emphasized that these outcomes would not have been possible without the extensive network coordinated by Business France and its partners, praising close collaboration with LaFrenchTech, Numium, Yuja Advisory, the Franco-Thai Chamber of Commerce, the Indo-French Chamber of Commerce and IFKI, which together mobilised French AI champions in India [14-15].


Keynote (Julie Huguet) – Julie Huguet, Director of the French Tech mission, introduced the summit as a bridge-building opportunity and reminded the audience that France now ranks among the world’s top three AI ecosystems (San Francisco, New York and Paris) [39-40]. She stressed shared values-trustworthiness, low environmental footprint and a positive impact for humanity-and cited President Macron’s announcement of the H-Company-St John’s Hospital collaboration to make hospitals more efficient and save lives [46-51]. Huguet showcased four French startups ready to leverage India’s scale: Agri-Co (digital agriculture), White Lab Genomics (AI-accelerated gene-therapy), Candela (scalable quantum technologies) and Edge Company (autonomous AI agents) [54-58]. She highlighted the complementary strengths of India’s scale and speed with France’s deep-tech excellence, scientific force and industrial capability [62-65].


High-level panel (moderated by Arun Sasheesh) – Arun Sasheesh framed trust as the prerequisite for AI scaling, recalling the Indian Prime Minister’s “human-manner” concept and the French President’s reference to UPI as an example of how trust enables massive scale, arguing that “trust is the only way to scale” and that large organisations will adopt AI only when they trust it [84-94][281-283].


Neelakantan Venkataraman (Tata Communications) – Neelakantan defined trust in simple terms – “I have your back and I will not fail you” – and insisted that it must be built into every layer of the AI stack, from data lineage to explainability, zero-trust networking, advanced guard-railing and end-to-end governance. He highlighted the AI Centre of Excellence (AI COE) that has moved projects from pilots to production, and noted that trust has shifted from soft guidance in early pilots to a baked-in regulatory requirement, citing India’s DPDP and the EU AI Act as examples of codified standards [115-117][130-142][135-137].


Valerian Giesz (Candela) – Valerian Giesz, co-founder of Candela, presented a five-pillar model of trust for quantum-AI systems: traceability, predictability, verifiability, security and accountability. To operationalise these pillars, Candela released the MERLIN benchmarking framework, which provides a shared baseline for quantum-AI results and aims to foster a community that bridges quantum and AI research [159-168][172-176][259-267].


David Sadek (Thales) – David Sadek outlined four complementary pillars of trustworthy AI. His team conducts “friendly hacking” to expose algorithmic vulnerabilities, ensures explainability of AI recommendations (e.g., a digital copilot’s decision), adheres to ethical and regulatory compliance (the EU AI Act and French digital ethics charter), and pursues “frugal AI” to minimise carbon footprints while developing AI-for-green applications such as aircraft-trajectory optimisation [188-197].


Sandeep Kumar Saxena (HCL Technologies) – Sandeep Kumar Saxena described how trust is cultivated within organisations. He recounted building AI-driven sales, forecasting and analytics tools for his own use, certifying every team member on AI, and launching “AI products made in India for India and the world”. At the summit he showcased seven solutions for enterprises, citizens and governments [215-224][220-222][217-219]. He argued that trust is built iteratively, through leadership commitment and demonstrable utility for customers.


Tanuj Mittal (Dassault Systèmes) – Tanuj Mittal traced the evolution of trust from a focus on model accuracy to a comprehensive lifecycle approach. He highlighted the need for data lineage, human-in-the-loop oversight, virtual-twin simulations of real-world conditions (e.g., testing a car in Indian road environments), built-in checks to prevent mistakes, and end-to-end validation from conception to decommissioning. He reinforced his point with the UPI example, noting that once users trust a platform, massive transaction volumes follow automatically [227-245][281-283].


Ecosystem mindset – Across the panel, speakers converged on an ecosystem mindset as essential for democratising AI. Neelakantan stressed that “we can’t do it all” and called for ecosystem-wide partnerships [253-256]; Valerian urged the community to “break the walls between quantum and AI” and to share benchmarks through MERLIN [259-267]; Julie highlighted the complementary strengths of India’s scale and France’s deep-tech excellence [62-65].


Transition moment – Mark Vialmopillier offered a brief tribute to Professor Raj Reddy, founder of the CMU Robotics Institute and co-winner of the 1994 Turing Award [300-304].


Keynote (Raj Reddy) – Raj Reddy, a Turing-Award-winning founder of the Robotics Institute, presented a forward-looking, people-centric vision, calling for measurable multilingual AGI that can serve villagers in their native languages and for “personal sovereign edge models” that operate offline to preserve privacy. He also urged the development of humane AI-powered weapons that disable rather than destroy, framing AI as a tool for peace as well as progress [294-324][340-347][306-312].


AI for Science panel (moderated by Prof Abhay Karandikar) – Professor Abhay Karandikar positioned AI as a core pillar capable of compressing decades of research into months, while warning that equitable access remains a major challenge and that the digital divide must be bridged [368-374][369-372].


Amit Sheth (IRO) – Amit Sheth outlined IRO’s strategy to create high-end talent, develop compact neurosymbolic models for domains such as healthcare, sustainability and pharma, and build an open knowledge-graph for drug discovery. He cited the recent FDA-approved arthritis drug developed with a pharma knowledge-graph as an example of AI-driven innovation [386-430][566-572].


Antoine Petit (CNRS) – Antoine Petit described the virtual “AI for Science, Science for AI” centre, which seeks interdisciplinary cooperation between mathematicians, computer scientists and domain experts. He warned that AI can generate large numbers of scientific papers, many of which may be false, creating a risk of wasted effort and misinformation [462-470][479-482].


Joelle Pineau (Chief AI Officer) – Joelle Pineau emphasized the reproducibility crisis and proposed two essential ingredients: transparent public release of artefacts and standardised evaluation criteria. She noted that AI can itself accelerate reproducibility through open challenges and shared benchmarks [548-558].


Audience Q&A – An audience member highlighted a trend whereby foundational scientific models are released openly while fine-tuned commercial versions remain proprietary, potentially limiting equitable access [608-617]. Pineau counter-argued that open-sourcing large models (e.g., the Llama series) dramatically expands adoption and scientific progress, despite industry resistance [618-628].


Policy perspective – Irakli Beridze of UNICRI presented the UN-backed responsible-AI toolkit for law-enforcement, now being piloted in India, Kazakhstan, Nigeria, Oman and Brazil. The toolkit provides practical frameworks, multi-stakeholder dialogues and policy recommendations to ensure AI is used responsibly while addressing public concerns [511-538][536-538].


Conclusion & action items – The summit reaffirmed that Franco-Indian collaboration is deepening through concrete partnership deals, that trust must be baked into every layer of AI systems, and that an ecosystem-driven, open-collaboration model is essential for scaling AI responsibly. Action items include formalising the Dacia-GT, ExoTrail-Druva and H-Company-St James Hospital agreements, launching Candela’s MERLIN benchmark, continued support from Business France and LaFrenchTech for matchmaking events, IRO’s development of neurosymbolic models and open pharma knowledge-graphs, and the rollout of UNICRI’s responsible-AI toolkit in India. Unresolved issues remain around defining universal metrics for multilingual AGI, balancing open-source foundations with proprietary commercial models, preventing the proliferation of AI-generated false papers, bridging the digital divide for the poorest populations, and establishing harmonised global guidelines for responsible AI [272-275][592-603].


Overall assessment – The summit demonstrated a strong consensus on the need for trustworthy, scalable AI built on complementary national strengths, while highlighting substantive debates on implementation pathways, openness versus commercial protection, and safeguards for scientific integrity. The diverse yet convergent perspectives suggest that future Franco-Indian initiatives will need to integrate architectural trust mechanisms, ecosystem partnerships, open-science practices and policy harmonisation to achieve inclusive, responsible AI impact [84-94][130-142][159-168][188-197][259-267][548-558][618-628][511-538].


Session transcript: Complete transcript of the session
Estelle David

We were also very proud yesterday to welcome the different leaders who came for the summit, and especially Prime Minister Modi and President Macron, to come on the pavilion and discover the companies and speak with our companies. So as you see, through this week, the French AI delegation was actually more than what you are seeing on the pavilion. Altogether, it was about 100 French companies who came. And actually, when you will meet them, you can find in different sectors like quantum-ready photonics, secure edge AI, mobility systems, cybersecurity, digital twin, and green tech. And actually, all of them are convinced, and trust, that AI is the next frontier. So now just to share with you what is making this week very special.

Actually it’s, as with what I said, you can see that was very intense, that’s for sure, but it’s not only intensity. Actually, as you will see, it’s also a lot of results achieved, and results with real partnerships, real signatures and real commitments between our two countries. I would just name a few for the AI. Just maybe the first, with Dacia Technology and GT Solved, where they signed a strategic partnership on Monday evening in Bangalore at the French consulate during the French AI night, and that really shows strengthening of Franco-Indian cooperation and engineering automation in intelligence. Thank you. A second one in a different sector, between ExoTrail and Druva Space, where they signed a major contract in the space industry to deliver 14 satellite propulsion systems, which is also a very strong symbol of the cooperation between France and India in terms of space.

Another signature between H-Company and St. James Hospital. And a final one that I can mention is actually a partnership between North France Invest and the TIAB that are actually uniting all together, which will create new bridges between actually one of Europe’s most dynamic industrial regions. And the other one is the T-U-B, which is actually a partnership between the two, one of India’s most powerful innovation ecosystems. So as you can see, when we see all these signatures, and I’m not just talking about AI, you can see that the dynamism between France and India is very strong. But now, actually, when you see all this, it wouldn’t have been possible without the strength of our collective network, and Business France, the trade and investment agency, is really proud to collaborate, and we have collaborated very closely with different partners: with definitely LaFrenchTech, and thank you Julie for the long-standing partnership supporting the French startups and for bringing all these startups here in India; with Numium, the leading French digital and tech association, helping to structure and mobilize the presence of French AI champions in India; also some other partners, Yuja Advisory, Achoo, but also the co-organizers of this event, this panel at the main summit: the Franco-Thai Chamber of Commerce, Indo-French Chamber of Commerce, IFKI.

I’m still in my… So thank you, thank you to all of you. Now we are actually arriving to today’s session, where we are gathering today the most influential leaders shaping the future of AI. So I won’t be long, but we are really honored to welcome Julie Huguet, Director of the Mission French Tech; also Arun Sasheesh, Associate Partner and Country Director for TNP Consultants; Neelakantan Venkataraman, Vice President and Global Business Head, Cloud, AI and Edge, from Tata Communications; Valerian Giesz, Co-Founder and CEO of Candela; Dr. David Sadek, VP Research Technology and Innovation, Global CTUI and Quantum Computing, from Thales; Sandeep Kumar Saxena, Chief Growth Officer from HCL Technologies; and finally, Tanuj Mittal, Senior Director Customer Solution Experience from Dassault Systèmes.

So we’ll be really happy to hear your experience. And before I conclude, just two thanks also to our partners, because you know this event has also been possible thanks to them. Our Platinum sponsors, CMA CGM, Total. Our Gold sponsors, BNP Paribas, Capgemini, Schneider Electric, and the Silver sponsor, MBDA. Again, thank you very much, all of you. Thank you to our co-organizer, IFKI, and I wish you a fruitful session. Maybe just before I end, also a big thanks to the teams, the different teams, the Business France teams, but all the French team all together, who worked like crazy to make this week possible.

Moderator

(Applause) Thank you very much, Estelle. We now move forward to our keynote address. It is my pleasure to invite Ms. Julie Rouget, director of LaFrenchTech. Julie leads one of the world’s most dynamic innovation ecosystems, LaFrenchTech, representing thousands of deep tech companies and scale-ups shaping Europe’s technological leadership. Julie, over to you. (Applause)

Julie Huguet

Thank you. Good morning, everyone. Thank you. I’m Julie Rouget, I’m director of the French Tech mission, so we support the growth of French startups in France and abroad. I’m truly delighted to discover the tech ecosystem here in India, a country that trains around 1.5 million engineers every year. I think it’s the highest number in the world, so I’m very impressed. The AI Impact Summit is an opportunity to create more bridges between France and India, and exactly one year ago, actually, we hosted the AI Summit in Paris. That moment helped us, helped our ecosystem to structure itself. It was the opportunity to attract investment, to unlock talent, to accelerate the creation of French startups. Today, the French tech ecosystem is strong and ambitious.

According to Dealroom, the top three AI ecosystems globally are now San Francisco, New York, and Paris. We are very proud of it, and we are really sure that the AI summit helped us to build this strong ecosystem. Across France, AI is becoming a pillar of our industrial transformation. We already have major European leaders such as Mistral AI or H-Company. And I’m convinced that the AI Impact Summit here in Delhi would be as valuable for India as it was for us. For the French tech, this week in India was of course a great opportunity to showcase French innovation. But it was also an opportunity to deepen our partnership with India. Beyond business, I’m truly convinced that we share common values: trustworthy, low environmental footprint, positive impact for humanity.

We support innovation when it reinforces our economies, but also when it brings real progress for humanity. Of course, we are committed to making the world a better place for all of us. Innovation only makes sense when it serves the greatest number. To give you a concrete example, the French President Macron announced yesterday that H Company and St. John's Hospital in Bangalore have started a collaboration to make hospitals more efficient and to help save thousands of lives. In healthcare, agriculture, climate, and many other sectors, Franco-Indian partnerships are key for innovation with real impact. This is why I was really happy the whole week to be here with outstanding French startups, companies already working with India, as Estelle told us a bit earlier, and others ready to build strong and strategic partnerships here.

And maybe I will introduce a few of them. Agri-Co is transforming agriculture through digital tools that connect farmers directly to markets. WhiteLab Genomics uses artificial intelligence to accelerate gene therapy development. Quandela is building scalable quantum technologies that will shape the future of computing. And H Company develops advanced AI agents capable of computer use, performing complex tasks autonomously, just like a human would. For these innovations to become global leaders, international development is key. And we all know that the world is changing; economic alliances are evolving. We see it with Canada, Latin America, the Gulf countries, and obviously here in India. Today, India represents a scale of 1.4 billion people and 200,000 startups. It's huge.

France represents deep-tech excellence, scientific force, industrial capability. And I think this complementarity is powerful. In France, we like to schedule meetings weeks in advance; in India, we learned to be a bit more flexible. And honestly, innovation also requires agility, and perhaps a bit of Indian wisdom. That's what we learned as well this week. And it was, as Estelle said, a very important week for the startups who came with us. So I wish you all a good session and a great day, and thank you for being here with us this morning.

Moderator

Thank you so much, Julie. We will now move to our high-level panel discussion, where leaders from telecom, quantum, industrial AI, cloud infrastructure, and enterprise digital transformation will reflect on how our two countries can jointly accelerate trusted AI across sectors. I am pleased to introduce our moderator for this session, Mr. Arun Sasheesh, Associate Partner and Country Director, TNP Consultants. Joining Arun on the panel are an exceptional group of leaders: Neelakantan Venkataraman, Vice President and Global Business Head, Cloud, AI and Edge, Tata Communications; Valerian Giesz, Co-Founder and COO, Quandela; Dr. David Sadek, Vice President, Research, Technology and Innovation, and Global CTO, AI and Quantum Computing, Thales; Mr. Sandeep Kumar Saxena, Chief Growth Officer, HCL Technologies; and Tanuj Mittal, Senior Director, Customer Solution Experience, Dassault Systèmes. With that, ladies and gentlemen, it is my pleasure to hand over the session to our moderator.

Arun Sasheesh

Thank you, Saloni. Good morning, everyone. It's actually a pleasure and a privilege to be part of this summit and to moderate such an esteemed panel. I would like to start by thanking Business France, IFKI, and the AI Impact Summit organizers for giving us the opportunity to discuss something very important: trusted AI. So maybe I'll start with what happened here yesterday. Our Prime Minister talked about the humane manner of AI, a concept that he introduced. Our French President talked about scaling, and he used UPI, the Indian payment system, as a good example of scale. And if you really think about it, there is a large element of trust involved in it. The way that we in India accepted UPI means we trust it.

And when we trust things, scale is possible. So usually when people talk about topics such as trust or safety, there's a bit of pessimism, talking about challenges. But in this particular session, I'd like to be more optimistic and present trust as the only way to scale. If you want the large corporations, the banks, the governments to adopt AI, they need to trust us. And only when these organizations adopt AI can we really achieve scale. So I'd like to set the tone with that comment. And maybe, you know, in the last five years, especially after COVID, we have been facing changes quite rapidly, right?

I mean, things are moving from one thing to another. We all started our careers somewhere else, and today we are talking about AI. So a lot of evolution in our lives as well. So I want to start from that point: introduce yourself, but also tell us about the evolution you have gone through, and how do you define trust? Maybe we'll start with you, Neel.

Neelakantan Venkataraman

Thank you. A very warm good morning to all of you, and thank you, Business France, for having me here. It's a pleasure to be here talking to all of you, and hopefully we'll have a nice interaction. So just to introduce myself, I head the cloud business for Tata Communications, which includes the general-purpose cloud, now the AI cloud, edge, and dedicated private clouds for our enterprise customers. We are an international company; 80% of our business still comes from India, and 20% comes from outside of India. As part of our cloud business, we did have a large AI/ML offering. And about four years back, when the transformer architecture suddenly came onto the scene, we didn't know about it at all.

So when it came up, we thought: what is this new architecture, and how is it going to impact us? And OpenAI and ChatGPT came up. And then we started thinking how we were going to apply this to our businesses internally, and also how we were going to offer it as a service to our customers. So ours has been a journey of learning a lot in the last three years, I would say. All of us are learning, and it's been pretty fast-paced, pretty steep technically. Through the organizational levels, right from the CEO to the bottom-most, we had to learn what it would take for this new world: how do we adopt Gen AI within the company, and how do we adopt Gen AI outside and offer it to our customers.

So tremendous scale of changes, and the potential for innovation for our customers and for the company. We established an AI COE within the company about three and a half years back. We had a lot of pilots which were going on within the company, and now they are in production. And similarly for our customers in the enterprise world, and beyond enterprise, governments and institutions which work very closely with government on citizen-scale projects; all of us have seen that, right? So truly, in the last five years, it's moved from POCs and pilots to production. And production at an entry level; scale, I would say, is yet to be achieved.

It's production in the sense that, okay, there is a return on investment in the enterprise context and there is a reasonable outcome for citizen-scale projects, and therefore we should start putting it into production and then, of course, scale it. And scaling means that trust has to be put on steroids. So let me talk about trust now. I would describe trust, in very simple words, as: I have your back and I will not fail you. That's trust. Beyond that, there's nothing. So when we deploy these systems, the stack, and then the use cases and the applications, trust inherently has to be a foundational element.

It cannot be a bolt-on on top of what we have built. It has to be built at every layer. And trust has also evolved within AI systems over the last five years. It started off, because these were POCs and pilots, with systems not really exposed to end users in a big way; it was a closed user group, and therefore trust was more of a good-to-have. But now it has become foundational, more architectural in nature: every element of the architecture needs to have trust built in. From a regulatory point of view, trust has also evolved. Earlier it was all soft guidance, saying you need to be ethical, you need to have transparency; but now it is baked into regulatory policies and requirements, whether it is the DPDP Act, which has been operationalized in India, or the EU AI Act, which is already operational.

So now it is in black and white. And from a technology point of view, as I said, trust is foundational and architectural. You need explainability built into the outcomes: the behavior of the systems should be predictable and explainable, and it should be auditable. For the data which is fed into the models, trained on, and used for inferencing, and for the outcomes which result, you need a very clear data lineage and end-to-end governance. And we talked about edge computing: billions of devices could be inferencing at scale, and therefore whatever happens in the cloud and whatever happens at the edge, the entire workflow and process has to have end-to-end visibility in terms of governance. And finally, resiliency is also trust; it should not be broken. So from Tata Communications' point of view, when we talk about trust being the bedrock and foundational element of AI, that is what lets it scale when you put it into production.

We build in trust components at every layer, starting at the infra level, including zero-trust networking, because networking is the invisible layer which carries data across AI platforms, up through the software and platform layers. We have advanced guardrailing technology, data lineage, data governance models, and end-to-end data pipelining and management. So I'll just hand it back to you. Long answer; sorry for that.

Arun Sasheesh

No, no, not at all. It's very important. And, you know, for us, Tata is synonymous with trust, so I have to mention that. Well, being a French company, I know about Quandela. But would you like to talk about Quandela, your evolution, and how you define trust from a quantum computing perspective?

Valerian Giesz

Thank you very much. Yeah, so maybe I will just introduce Quandela a little bit. It's a startup coming from a CNRS lab; we use CNRS technology to build photonic quantum computers. Actually, we are a full-stack company developing software and hardware. And now we partner with industries like Thales to move quantum from the lab to industry, to the real world, and to deploy systems. And basically, as a co-founder, trust is a key pillar in our roadmap, because we need to build reliable systems; we need to demonstrate compliance and security in order to scale. That's very important for us. So when you ask what trust means in my vision, and I'm an engineer, basically, it's easy.

First, traceability, because we need to trace the systems, the models, the data that we use for AI. Even for quantum (we use quantum artificial intelligence, we develop quantum machine learning), it's important to trace the results and to get reproducible runs. Second is predictability: you need to know where the limits of the models are, and where the failures are as well, and this is why it's important to investigate this. Verifiability is the third one, because we need to benchmark performance; actually, we are now at this step. At Quandela we released a framework called MerLin for machine learning.

And it’s very useful. It’s used to benchmark applications and performance on quantum computers and using AI techniques and run stress tests on the applications. Fourth, security. And the fifth pillar, which is accountability as well. How to make sure that we have a clear ownership along the value chain of AI on quantum computing between hardware providers, software providers, certificate providers. We need to have clear ownership about everything. And with this, all together, we will be able to work in trust. We will be able to build the trust for the end users, and we will be able to scale. That’s for me. Thank you. Thank

Arun Sasheesh

Thank you, Valerian. And Dr. David, you are in charge of AI and quantum computing at Thales, both evolving topics. How do you see this, and what is trust for you? You have multiple topics in hand.

David Sadek

We have a team doing what we call friendly hacking, which actually attacks our own algorithms in a friendly way to identify their breaches and vulnerabilities and to propose countermeasures. And by the way, this team won a challenge from the French MOD two years ago, because the team succeeded in retrieving sensitive data which had been used to train the system. The third pillar is the explainability of our systems. If you have a digital copilot in a cockpit recommending that the pilot turn left in 45 miles, for example, the pilot should be entitled to ask: why should I do that? Especially if she or he had in mind to do something different. And the system should be able to answer "because there is a threat, there is a thunderstorm", and not "because layer number three of the neural net was activated at 30%".

Okay? and finally the fourth pillar which is last but not least is what we call responsibility and responsibility actually is twofold there is one stream uh which is the uh compliance of ethics principles of laws of regulation principles as you know in europe we have this ai act and talus also issued a digital ethics charter a few years ago which comes in 10 commitments actually we are really working to achieve it’s on our strategic roadmap business roadmap now and the second stream is about the uh uh full carbon footprint and energy consuming so we have teams working on frugal ai to minimize the volume of data which are used to train systems for example this is minimizing the the footprint of the technology itself ai technology And we have also the complement of this is what we call AI for green, how to use AI to minimize the footprint of applications like working on optimizing the trajectories of aircraft, for example, to minimize what we call the condensation traits which are generated by the aircrafts.

So just to conclude this first part, I would say that trust actually is not a label. It’s not a promise. It’s a proof. Things have to be proved in our business. Thank you.

Arun Sasheesh

Thank you, David. Sandeep, coming to you: we are in the service industry, and our whole operation is built on relationships and trust. So how are you coping with these new challenges, with all these new technologies coming up? What's your take on this?

Sandeep Kumar Saxena

Thank you. Thank you for inviting me here. It's a very valid question, and I will not answer it in a very technical way, because I'm sure the panel has covered all the aspects around technology, architecture, and governance. So my name is Sandeep. I've been in London for the last 24 years, and I'm moving to India next month to accelerate the India business. When I was there, I was managing the European business for HCL Tech; we're just about a $15 billion company providing services. And I took on this job of growth markets, which is India, the Middle East, Africa, and France. It gave me a very different perspective, because I was managing about a $1.5 billion business, and now here I come into a completely different world.

And I started like a startup: I built my own systems, based on AI. Like we say, before you preach to anybody, you learn yourself. So all my systems today for the growth markets I lead are built on AI: my inside sales engine, my business analytics, my forecasting, everything. I have moved from analytics to reasoning, and I am hoping I will reach predictability in some way, because the agents are still not predictive; they are still reasoning. But that's where I started. Every person in my sales and delivery teams is certified on AI, and I started with myself. See, if you have to embrace AI, it starts from the top, from the leader. And we talked about trust: it starts from you, if you as a leader imbibe it. There is no Excel sheet in my world; there is no PowerPoint in my world. You ask a question using voice, you get an answer on a dashboard. I can show you right here; of course, I will not tell you what my forecast for this quarter is. You ask a question about a company, you get the answer in two and a half minutes, and that is the power of AI. Earlier, we had a lot of people trying to dig data from here and there, and it doesn't exist; now it is two and a half minutes, whether you ask for the market approach or anything else you want to do. So in my view: imbibe it yourself. It is an iterative process. You do not build trust just like that; you build it over a period of time. You have to be patient, you have to learn, you have to make somebody else learn, and that learning process continues over a period of time. And then you build trust.

So my advice to anybody: the reason I moved to India is very exciting; it's a land of opportunity, and it's like coming home. And we are in NCR, which we call Delhi; it is the home of HCL Tech. So we have a very unique proposition, or advantage, in India and globally: we have what we call AI products. Very proudly, made in India, for India and for the world, which is HCL Software. We have the expertise of our global services, working with a lot of customers across the globe. So what it gave me is an opportunity to bring AI products and services together into what I call AI solutions. At this AI Impact Summit we have launched seven solutions, not just for enterprises but for citizens and for governments as well. You are more than welcome at Hall 4, 4.5; if you have not visited, please go and see what we are talking about. These are solutions which will help us protect ourselves: fraud detection systems, compliance systems, training systems, skilling systems, not just for enterprises. So to me, AI is about people, progress, and planet. Thank you.

Arun Sasheesh

Coming to you, Tanuj. Dassault is such a flag-bearer of French innovation. How do you see this whole evolution, and what does trust mean at Dassault?

Tanuj Mittal

Thank you, Arun, and good morning, everyone. I represent Dassault Systèmes, which champions the cause of industrial AI platforms. Now, to this point of trust: the definition, the expectation itself, has evolved over the last several years. Five years back, for example, AI was still in silos, and the definition of trust was mostly centered around the accuracy of the output. You have a model, you feed data, you put in a query; if the results are near to your expectation, you are happy. But that is no longer the situation, because of the widespread understanding of AI as a topic, and its adoption as well. Now there are new dimensions which have been added to make it trustworthy, and there are quite a few points I wanted to highlight.

Most have already been covered by my fellow panelists, but for the sake of clarity, and at the cost of repetition, I will say them again. The first one is, of course, the lineage of the data. The industrial AI platform needs to ensure, by design, that the data being leveraged to solve a problem is ethical, that it has traceability, and that no mischievous data is being leveraged. With that done, when the output comes, it is credible and trustworthy for the people who are going to use it. The second point I wanted to highlight is about people in the loop. We still have a long way to go before we trust a totally automated system without human intervention. We still like to have, at least at the governance level, people in the loop who ensure that the processing and the output given by the machines are indeed in line with the objective for which they were created.

100% trust in machines alone is still a little far off, so people in the loop definitely build trust for all of us. Another aspect, particularly from an industrial AI perspective, is to simulate the result of an AI model in a real-world environment. For example, you design a car in context: the car has to run on roads, and the condition of roads changes from place to place. If you really need to trust a car which was, for example, developed elsewhere in the world but is being used in India, people will trust it if that car has at least been tested in the real-world environment of India as a context. You now have virtual twins not only of the product; for Dassault Systèmes, you also have virtual twins of the environment.

So you can simulate how that car will behave when it actually gets on the road in Indian conditions. That builds trust. Another example is the kind of checks and balances in the model itself, so that it does not let you make a mistake, whether the mistake is unintentional or deliberate: the kind of compliance you have already built into the model. If that is robust, the chances of getting a wrong or broken output are far lower, and that builds trust. And the last point I wanted to highlight: if an AI application is not end-to-end, from conceptualization to decommissioning, and is still in silos, the overall output is less trustworthy. Imagine instead a situation where, right from conception up to decommissioning, you have been able to simulate the whole process multiple times, prove it, streamline it, and then launch it.

That builds a lot of trust for the people who are actually going to build that system in the physical world, and consequently for the people who are going to use it. So these are some of my views. Arun, back to you.

Arun Sasheesh

Thank you. Thank you, Tanuj. I think we have some more time, and I'm glad that all of you, in fact, touched upon the deep strength of French innovation and technology, and the twin strengths of Indian scale and speed, in a way. So maybe I quickly want everybody's point of view: what is the mindset change that you are looking for, to build trust and the democratization of AI at scale? Neel, quickly?

Neelakantan Venkataraman

I would say that the mindset change we have to move towards is the mindset of an ecosystem, because we can't do it all. For example, we partner with Thales on many of the security components we provide as part of a solution. So it's an ecosystem play, and we need to work very closely to make sure the trust is not broken and the trust architecture is maintained across the ecosystem.

Arun Sasheesh

Valerian?

Valerian Giesz

I think on my side, the priority should be to break the walls between quantum and AI and build a huge community. This is also why, at Quandela, we released MerLin, a framework which aims to do that. Because that's the point: trust comes from benchmarking and reproducibility, not from one-off charts. MerLin has been released with one very pragmatic first mission: establish trust with the AI community, with AI developers, in quantum computers, a brand-new technology which is now available. We have actually published some reproductions of papers; we are here to show quantum machine learning results in a controlled environment. We are turning scattered claims into a shared baseline, building a community, and inviting people to use them.

So yeah, my main topic is: let's break the walls, and let's share what we learned in order to establish trust all together and build a common baseline, especially between France and India. In France, we can develop the technologies; in India, we can scale the technologies. So we have an ecosystem and a community.

Arun Sasheesh

What’s your take, David?

David Sadek

Well, I would say that in France we have spent decades building things that are really supposed to work in contexts where failure is forbidden, with companies such as Thales, Dassault, and Airbus; it has taken us decades to do this. So we are living in a world of certification, of regulation, of mathematical proofs. Trust has to be proved; this is very important. We cannot afford, as I said earlier, to just declare trust, to say: okay, please trust us. When you deal with critical systems, you have to prove the trust. And I used to say that trust is gained by drops and lost by buckets, so this is very important. And India has been doing something equally extraordinary, I would say in record time, with this digital infrastructure at billion-human scale, which is really extraordinary. And I think that the combination of depth and scale between France and India is really the very challenge here.

And to keep trust within this challenge is probably the way to go to make people adopt AI at large scale. Thank you.

Arun Sasheesh

Sandeep, for you. Can you just say one word?

Sandeep Kumar Saxena

Yeah. Just be open -minded and learn to adopt change. Adaptability. Very simple. There is nothing else.

Arun Sasheesh

And you, Tanuj?

Tanuj Mittal

Yeah, quickly: the scale is directly proportional to the trust we build in the system, for sure. And I'll build on the example you gave initially, which our Prime Minister also quoted: UPI. Launched in 2016, last year in December it clocked some 21 billion transactions, translating to some 30 lakh crore rupees' worth of money moving between people. And today UPI is used even by the most digitally illiterate person in India; he doesn't hesitate to put his trust, with his money, in the system. So if you build the trust, then the scale comes automatically.

Arun Sasheesh

Thank you, gentlemen. I think we have almost finished our time. Thank you very much; I encourage you to meet with the speakers, and thank you for your time.

Moderator

Thank you once again to our moderator and to all our distinguished panelists. I would now invite all the speakers to please remain on stage for a brief memento presentation by Mr. Mark Vialmopillier, and for a group photo. Ladies and gentlemen, please join me in applauding our speakers as we take this moment together. Thank you. Our next speaker, Professor Raj Reddy, was the founding director of the Robotics Institute at Carnegie Mellon University, and he was instrumental in helping to create the Rajiv Gandhi University of Knowledge Technologies in India to cater to the educational needs of low-income, gifted rural youth. He and Edward Feigenbaum won the 1994 Turing Award, sometimes known as the Nobel Prize of computer science, for their exemplary work in the field of artificial intelligence.

I now request Professor Raj Reddy to take the stage to deliver his keynote.

Raj Reddy

phone in your pocket, it was listening to you and using it to guide your discussion. I'm hoping we'll create user-friendly interfaces so that when I speak in Telugu, you can hear in Hindi, and when you speak in English, I can hear in my preferred language. And I think we can get there very quickly; it's being done already. There are two startups in India, Sarvam and BharatGen, both trying to do it. My request is that we create a quantitative, measurable metric to show that we have achieved this goal. What that means to me is: it's not enough. Already people will say we have multilingual intelligence, we have systems where you can speak in one language and be heard in another.

But it's not usable, especially if you're a person in a village and you don't even know where to begin. So the first issue is: how do we create a multilingual AGI, and how do we make sure that we have measurable progress? There's a statement: if you can't measure it, you can't improve it. We need to improve the existing models, and they will probably need more computation, more memory, and more bandwidth. Fifty years ago, we created a thing called the 3M computer: a MIPS of processing, a megabyte of memory, and a megapixel display. Today, we should create 3T computers: a teraflop of computational power, a terabyte of memory, and a terabit of bandwidth. That's what we should aim for. That means every one of us should have in our pocket an AI companion that runs what we call foundation edge models.

Right now, the models that are on the edge are like three billion or nine billion bytes; we're off by a factor of 100, and we need to get there. And India can... where am I? How am I doing for time? It used to be that there'd be a timer up here, but whenever it is time, somebody tell me and I'll stop. Okay, so that's one. The second important point I want to make is about people at the bottom of the pyramid. Most of the talks I've heard, most of the expectations, assume you are AI-enabled and can actually make effective use of AI. I come from a little village; I guarantee you not one of them knows anything about computers or AI, and they are simply not going to benefit from this whole technology. So what we need to do, just like the agricultural revolution of M. S. Swaminathan, is figure out a way to get this technology to people at the bottom of the pyramid.
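Professor Reddy's "factor of 100" can be checked with back-of-the-envelope arithmetic. The figures below are illustrative, taken from the talk (a roughly nine-billion-byte edge model today versus the terabyte-class "3T" target), not measurements:

```python
# Back-of-the-envelope check of the capacity gap cited in the talk.
# Both figures are illustrative assumptions from the speech, not data.
edge_model_bytes = 9e9   # ~9 billion bytes: a large on-device model today
target_bytes = 1e12     # 1 terabyte: the "3T computer" ambition

gap = target_bytes / edge_model_bytes
print(f"Capacity gap: ~{gap:.0f}x")  # on the order of the factor of 100 cited
```

With a three-billion-byte model instead, the same division gives a gap of roughly 333 times, so "a factor of 100" is the conservative end of the range.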

Again, I'd be happy to talk about any of these for much longer, but we only have a short time. Then, in order to do both of these things, I said we need teraflop, terabyte systems, and what we need are personal, sovereign edge models. Currently, if you talk to anyone, they'll say we already have access to AI. But it is not private; it is not personal and secure. Systems are always going to the cloud to access the AI models, and as soon as you do that, you have no privacy. In the future, we want systems which are personal, autonomous, and can be used to do things.

So I'm talking about cognitive assistants that are always on, always working, always learning. And that is the challenge: how to get there. We have to cut it off from the grid; we cannot let it go to the grid, because then it's no longer private. So anyway, there is a whole set of issues of that kind. How much time do we have? Anyway, somebody tell me. There are three or four other topics we can talk about. One is: I had a child come and say, if AI is going to teach me and knows everything, why should I go to school? And the answer to that will take longer than the two minutes I have.

But you can figure it out. Basically, what we need to do is teach the kid learning to learn, using AI, through dialogue; learning to think, which means teaching critical thinking. Right now, most kids in India don't even open their mouths in classrooms; they're afraid. So we need to get over that barrier, let them talk and think and go through critical thinking; and learning to do, because you have to learn how to execute. With that, I'm going to stop, but I want to leave you with one other thing which you can figure out. One of the things I remember from the Vedas is Om Shanti Shanti Shanti. Peace. One of our keynote speakers said that AI-based autonomous weapons are going to destroy the world.

That’s a risk. Why don’t we have humane weapons? When a missile is going to hit a hospital or a school, it is easy with AI to discover that and deflect the missile. And why should we even kill the soldiers? They’re innocent; they’re just somebody recruited, and they’re being bombed and killed. We should build humane weapons that will disable them rather than destroy them. There are lots of very interesting issues of this kind we need to think about. Thank you. Namaskar.

Moderator

A very good morning, ladies and gentlemen. Our next session is a panel discussion on AI for Science. The panel will be moderated by Professor Abhay Karandikar, Secretary, Department of Science and Technology, who is also the chair of the AI for Science Working Group. I would now request the panelists to please come on the dais: Professor Karandikar; Mr. Irakli Beridze, Head of the Centre for AI and Robotics, UNICRI; Professor Antoine Petit, CEO and Chairman, CNRS France; Ms. Joelle Pineau, Chief AI Officer; and Mr. Amit Sheth, Founder, Indian AI Research Organization. A very warm welcome again to the panelists. Right. Group photograph.

Okay, I request all on the dais to please come forward for a group photograph. We’ll have the photograph for you on your mementos. Thank you, panelists. Thank you, Professor Karandikar. I now hand it over to our moderator, Professor Abhay Karandikar, Secretary, Department of Science and Technology, to carry forward the panel discussion. Sir, over to you.

Abhay Karandikar

Thank you. Thank you, Ekta. So, distinguished panelists, colleagues and all the members of the global scientific community, we have a very distinguished panel today. It is my pleasure to welcome you to this panel on AI for Science, which we consider to be a core pillar of our vision for this India AI Impact Summit. Today we stand at the threshold of a new research paradigm, and our goal is not just to witness the AI revolution but to steer it towards a more equitable, inclusive and transparent future. In today’s AI world, we are moving beyond traditional methods, as AI-driven models and automated experimentation have the potential to compress decades of research into months.

The rapid advance of these technologies, however, has so far not been equitably distributed, and that is one challenge. Many regions still face significant barriers. Still, the realm of possibility for using AI for scientific discovery continues to generate a lot of excitement. Today, we are joined by leaders who represent the entire spectrum of scientific innovation: policy makers, institution builders, and voices from governance and national research ecosystems. I look forward to the panelists’ insights on the exciting possibilities in AI for science, and on how we can bridge the digital divide and build a genuinely reciprocal global scientific ecosystem. So with this, I will begin with a few questions.

I will request the panelists to answer; of course, they are free to elaborate on anything else, and then we will open the floor to the audience. So let me begin with Dr. Amit on the far end. Amit, you have been building IRO as a national-scale institution in India. Can you tell us how this model can help overcome the specific barriers we have identified in this region, such as inadequate compute and fragmented data sets? I would also like you to elaborate on how we can ensure that AI research conducted in our centres of excellence actually reaches the translational stage, addressing real-world challenges.

So if you can just take five to seven minutes on this.

Amit Sheth

Hello. Yeah. Thank you very much, Professor Karandikar. This is a perfect question for me to talk about; this is why I’m here. I moved from the USA after 44 years to address exactly the question you asked. Two days ago, I was on another panel, and I asked the audience this question: if I were the founder of DeepSeek and had all the funding that he had and has, could I find those 200 to 250 AI engineers and researchers that he had access to, to build DeepSeek? Out of around 100 people in the audience, three raised their hands saying, yeah, we might. Of those three, two were students. So only one, you know, mature person basically thought that we could do that.

And I think that gives an answer of what we need to do. India is well on its way to producing many people who know something about AI, and they will certainly have the skills necessary. India has been big in IT services, and whatever IT services need, it will be able to supply; the skill set people have here is adequate for that. But two very important members of IRO’s board, Ajay Chaudhary and Sharath Sharma, have extensively lamented that India has not been a product nation. We have not made global products; hardly any global brands have been developed in India.

And for that, we need more than skills. We need people at the high end of expertise. That means our own indigenous research capacity, our own ability to train innovatively. And that’s what we need to do. A very common model has been that you do your bachelors here. Take the example of Arvind Srinivasan: he did IIT Madras, then went outside and did his PhD at Berkeley. I did mine at Ohio State. Then he worked for three companies, DeepMind, OpenAI, and Google, and then he started his own company, but that also in the U.S. We want that to be done here, right? The same ecosystem in which he got trained after leaving India, we want to provide in India.

And there are, I think, a lot of things happening. As you know, there is a 40% decrease in Indians going to the United States for studies, and that will continue for a while now. You know the results. So, first and foremost, IRO is developing an environment to create high-end talent of innovators. By the way, IRO’s founders are professors who have graduated nearly 200 top-end PhDs, so we know how to create that. Secondly, we have created a broad variety of collaborations with various universities, and we are starting to do the same with industry. And we are creating significant infrastructure to support IP creation, licensing it, and working with the corporates and startups who will make the products.

So the idea is that we’ll co-innovate: we’ll work jointly at IRO with the companies, the startups, and the entrepreneurs. We have already lined up a large number of investors, angel, seed, as well as growth stage; they are all hungry for deep-tech AI startups, and we will provide a comprehensive environment for them. Now, some of us founders have also done companies. Three of the four companies I have founded are AI companies licensing the research I did at my university. Ramesh Jain has done more companies than I have, and he’s also a co-founder. So we understand the entire pipeline it takes to go from lab to global products.

And so this is what we are going to do for India. That was it. Thank you.

Abhay Karandikar

Now, let me switch gears and go to Professor Antoine. You have been the chairman and CEO of CNRS France, and CNRS, as you know, operates at a scale that most research organizations can only imagine. So, two questions. First, what structural shifts do national research and funding agencies need to make to support an interoperable scientific ecosystem that can sustain AI research beyond short-term pilots? And the added question: is there a need to build an AI for Science platform, like a mega-science facility?

Antoine Petit

So thanks for this invitation. Yes, two words about CNRS. CNRS in French means Centre National de la Recherche Scientifique, and you probably don’t need an AI translator to understand that it means National Centre for Scientific Research. And it’s true that we’re a big institution: we employ more than 35,000 people, among which 30,000 are scientists, and we cover all fields of science. Clearly, AI has opened a new era in science, in some sense, because AI is not only an accelerator of existing techniques; it forces us to imagine new ways to do science. Just to illustrate this: in materials science, the classical way is roughly that you first define new materials and then you study the properties of these materials.

Now you say, I would like to have a material with such properties, and then, thanks to AI, you build the material, with a high probability that it will verify these properties. So in some sense, it’s not just a global acceleration; it’s a reversal of the way we do science. And this opens a new era in which you really need talents, of course, but you also need cooperation between different sciences. That’s probably a challenge for an old institution, if I may, like CNRS. We were organized classically by science; we cover all sciences, including the humanities and social sciences. But you see that with AI, you really need new ways for scientists to cooperate.

And this means that, as usual, the key point is talent. It means that we have to build ways to push people to interact. That’s why we created, some years ago, a virtual centre called AI for Science, Science for AI: we have to create a kind of virtuous loop between, in some sense, producers of AI, mathematicians and computer scientists, and consumers of AI, who can come from every discipline. But the trick is that these producers will not simply produce tools or software to be used by consumers; rather, consumers will find, in some sense, new ways to do research.

And that’s clearly something we try to do. Of course, in addition, we absolutely need computing facilities at the highest level, even if we also try, as a lot of people do, to work towards more frugal AI, in order not to have a carbon footprint that would stop the development of this AI. So that’s clearly a challenge for a centre like CNRS, but I know it is a challenge all over the world. And probably a key point is to really start from scientific use cases in order, as I said, to rethink the way we do science. So do we need a platform for that? I don’t know. We clearly need to have cooperation.

That’s absolutely key. At CNRS, we have a long tradition of cooperation with India, and with DST in particular. And clearly, from my point of view, the very pragmatic way I feel India approaches AI can be an example for us. You really try to apply AI for your citizens. And in some sense, for science, I think the process should be the same: we should start from very pragmatic scientific questions in different fields and see, thanks once again to cooperation between data scientists, computer scientists, mathematicians and colleagues from other fields, how we can apply AI. But AI for science also has some risks. In particular, you can produce a lot of papers thanks to AI.

And it’s not clear whether these papers are right or not. In some sense, we could lose all our time producing false papers with AI and then refereeing these papers, also with AI. That’s a difficulty we all face; I think none of us has a solution right today. But let us be optimistic and think that AI for science will allow us to make progress and to discover new results, but also new ways to access these results. In particular, there are right now fascinating applications of AI to mathematics, a bit frightening in some sense, because new results have been obtained in mathematics without the help of any human. Does it mean that AI will replace scientists?

Abhay Karandikar

Okay, so do you think AI will replace scientists, or will it act as a co-scientist or a hybrid scientist? With that, let me introduce Professor Joelle Pineau. You have an academic background, and you are now a Chief AI Officer, so you have worked in industry as well. So, just your take.

Joelle Pineau

… the properties of new crystals. And in this particular case, once you’ve done the ranking, you take your top-ranked candidates, and you still need to run them through a wet lab to verify the properties. Your mathematical model has some imperfections, some approximations, some errors. But by having the ability to rank the candidate solutions, you cut down the search time drastically. In the old days, you had to list the possible solutions and test them one by one in the lab, using your intuition about the order in which to test them.

But now you have a ranking algorithm that tells you in what order to test them. For those of you who remember the web pre-PageRank, where the search to find a website of interest was incredibly long: all of a sudden you had a good ranking algorithm, and it was a complete game-changer for retrieving information. Now it’s a complete game-changer in terms of finding candidate solutions to problems in AI. And the process I described for this one case applies across all sorts of other areas, whether it’s biology, mathematical theorems, and so on and so forth. So this is not magic: there is an organization to how you take the data, how you use it in a generative model, how you do the ranking, and then how you verify your solutions.
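The generate-then-rank-then-verify loop described here can be sketched abstractly. Everything below is a hypothetical stand-in, not any real discovery pipeline: random vectors play the generative model, a toy sum plays the learned surrogate, and a noisy function plays the wet lab.

```python
import random

def generate_candidates(n):
    # Stand-in for a generative model proposing candidate materials;
    # here each candidate is just a 4-dimensional parameter vector.
    return [[random.random() for _ in range(4)] for _ in range(n)]

def surrogate_score(candidate):
    # Stand-in for a learned model that cheaply predicts the target
    # property; it is only an approximation of the real experiment.
    return sum(candidate)

def wet_lab_verify(candidate):
    # Stand-in for the expensive physical experiment: ground truth
    # that the surrogate approximates (here, the same sum plus noise).
    return sum(candidate) + random.gauss(0, 0.1)

random.seed(0)
candidates = generate_candidates(1000)

# Rank every candidate cheaply with the surrogate model...
ranked = sorted(candidates, key=surrogate_score, reverse=True)

# ...and send only the top few to the costly verification step,
# instead of testing all 1000 one by one in the lab.
top_k = ranked[:10]
verified = [(c, wet_lab_verify(c)) for c in top_k]
best_candidate, best_value = max(verified, key=lambda cv: cv[1])
```

The saving is in the number of expensive verifications: ten lab runs instead of a thousand, at the cost of trusting the surrogate’s ranking to keep good candidates near the top.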

And the verification process changes depending on what the domain is. In some cases, the better your model of the data, and we hear a lot about world models, the ability to predict the properties of the system means that you can accelerate the discovery further: you get a better ranking, and you have to take fewer solutions to the lab. So that’s just to give you a sense of how to use it in practice, to make this a little bit more concrete for people. Thank you.

Abhay Karandikar

Now let me come to Dr. Irakli Beridze. Irakli leads the Centre for AI at the United Nations Interregional Crime and Justice Research Institute, where he manages one of the first UN programs dedicated to AI research.

So, Irakli, what is your take, from your experience, on the risks versus benefits that AI for science can potentially pose, a question other speakers have also raised?

Irakli Beridze

Thank you very much. Thank you for the question, and thanks to the organizers for putting this together and inviting me to the panel. It’s a real pleasure to share the panel with the distinguished speakers who spoke before me. I will give some reflections on what we are doing and how we’re looking at the discoveries of science, including social science, how that translates into policy developments in some of the United Nations streams, and how we are working with that. So I’m leading the Centre for Artificial Intelligence and Robotics for one of the UN agencies, called UNICRI. Our mandate is anything related to AI: crime prevention, criminal justice, rule of law, human rights, and now AI literacy.

The center itself opened in 2017 in The Hague, in the Netherlands, and we have a global mandate supporting law enforcement agencies all over the world to use AI in a responsible way. We develop specialized toolkits and policy frameworks for that. We also support investigators in using AI to solve concrete crimes. At the same time, we are assessing the risks of how criminals and malicious actors can use artificial intelligence, and how we can support global frameworks to ensure that AI is used in a beneficial way and risks are mitigated properly. So this is the type of framework we are working within. A couple of points now, starting from the broad side, from the United Nations.

The UN just approved a Scientific Advisory Board, which is an extremely positive development. Just an hour ago, there was a panel about science as it relates to AI governance and how crucial it is, especially for policy makers and the broader audience, to understand what we are actually trying to govern. What we are hoping is that the Scientific Advisory Board is going to do just that. Quoting the Secretary-General of the United Nations, who said that policy should be as smart as the technology it aims to guide: it is so true, and right now there are quite a lot of misconceptions and disconnects in that sense. Now, a little bit about law enforcement and how we are looking at it.

There are a number of things and a lot of aspects that could be touched upon. Several years ago, when I started the center and we started our programs, especially on the responsible use of AI by law enforcement, most law enforcement agencies were not using AI. We are talking about back in 2018; they didn’t even know what the tools were, and we had a really small handful of examples here and there. Then last summer, we conducted one of our regular global meetings on AI for law enforcement, this one hosted in Brazil, and we had so many use cases that we didn’t know what to showcase.

On the one hand, this is a really good development. Law enforcement needs to use AI, and it needs to solve problems: right now, without AI tools, the vast amount of data that exists out there cannot be interpreted or put to use. But at the same time, it has to be done in a responsible way. So what we are doing is developing specialized toolkits for the responsible use of AI, and that involves multi-stakeholder dialogues. We bring scientists, law enforcement agencies, governments, and academia together to put those findings and frameworks in place so that they can be applied directly in policy. India is one of the pilot countries right now.

We have five countries where this toolkit has been implemented: India, Kazakhstan, Nigeria, Oman, and Brazil. A couple of days ago, we had a meeting at the Central Bureau of Investigation, and we understood that a lot of progress has already been made in the implementation of this particular project. At the same time, we have launched a scientific project on how to ensure that the public trusts the use of AI by law enforcement, and in a few weeks we are going to issue policy recommendations and the report that comes out of it, which is again a very crucial form of AI governance in that particular field where AI is being used.

AI has been used by law enforcement, but the public fears it and has a misunderstanding, perhaps, of how it is being used and applied in reality. So all of this is happening there. Thank you.

Abhay Karandikar

Thank you, all the panelists. Before we open the floor, I have one quick question, not in any order, for Dr. Pineau, since you made the very important point that AI should be looked at as an instrument. There is a reproducibility crisis in science. So what do you think: do we need standards or methodologies so that AI-generated discoveries are considered as real and as reliable as conventional ones?

Joelle Pineau

I do appreciate the question. I’ve been quite concerned about reproducibility in the field of AI more generally for a number of years, starting around 2018, and have published quite a few papers specifically on this topic. I’ll keep it very short. I do think this is an issue, and I do think AI can be an instrument to accelerate the reproducibility of scientific findings, because in those cases the question is already there and there is often a candidate methodology, so we can apply the wheels of AI, using reasoning methods and generative methods, to accelerate reproducibility. We’ve looked at doing that by running reproducibility challenges: I’ve run an annual reproducibility challenge around some of the AI conferences. So I think there’s a lot of opportunity there.

I would emphasize two ingredients that are necessary, which are often associated with discussions of responsible use of AI. One is transparency: to facilitate reproducibility, it helps to have the artifacts of the scientific process publicly available. The second one is evaluations: trying to reproduce a method without being very specific about the criteria can be difficult. So by spending some time on transparency and evaluation, we can really facilitate this process.

Abhay Karandikar

Okay. Amit, your…

Amit Sheth

Yeah, so I think we’ve gotten great things, like productivity gains and the other things that Kali from Cohit mentioned, out of using very large models trained on arbitrary data. We plan to bring to India something very unique. From the very beginning, in fact, when I had a chance to talk to the Prime Minister, we said that India needs to make its mark in a new form of AI. And in this case, I get the chance to explain perfectly what we are doing. Instead of using a big model as an instrument or a partner, we are developing models that are very specific. We call them compact custom neurosymbolic models, such that we solve a specific problem deeply.

IRO has taken up healthcare, sustainability and environmental science, and pharma as initial domains. Recently in pharma, a company called BenevolentAI obtained FDA approval for a new rheumatoid arthritis drug that was developed using a knowledge graph and deep learning. So in our case, we want to create a specific model for a specific problem. Neurosymbolic means that we can make the models explainable, safe, aligned, and grounded, with deeper reasoning and planning options, and so on. So I think this is an alternative model for AI that is likely to come up and would solve problems deeply, very specifically, with high value.

Abhay Karandikar

Okay. Just quickly, I wanted to ask: do you think AI for science can act as a bridge to solve problems in some of the priority sectors, like climate resilience, agriculture or energy, particularly for countries which have limited experimental facilities?

Antoine Petit

I have two hours, right? No, no. Clearly, as I said before, AI will play a key role, in particular because of its ability to treat huge amounts of data. I said before that we are also consumers of AI. If I look at the domains that produce the largest amounts of data, it’s not at all mathematics or computer science: it’s particle physics and astronomy, and they need new techniques based on AI to treat this data properly. But coming back to North-South relations, as you said, I’m convinced that we need cooperation. We live in a period where sovereignty has become a buzzword. But sovereignty does not mean, from my point of view, isolation. We need to collaborate.

We need to share. We need to develop open science and open software, and clearly this is not in opposition to the will for sovereignty. And, to be brief, I think we need to start from use cases, either use cases coming from civil society or use cases coming from science. As developed countries, we... You know, France has a particular history with Africa, and for a long time we tried to explain to African people what they need. Now we have understood, at least I hope, that the main point is to understand what they actually need, and to try to develop cooperation in order to fill those needs. Thank you.

Abhay Karandikar

Irakli, you actually made an important point about responsible AI. What do you think about shared global ethics for AI, so that AI-driven scientific breakthroughs are governed by some kind of shared ethical framework?

Irakli Beridze

Yes. Okay. Thanks a lot. So there are many, many things happening at the moment in the world. On the one hand, we have the global digital divide, where a lot of countries are investing in the technology and advancing, including in education and scientific breakthroughs. And then you have quite a large portion of the world which is either staying behind or has the potential to stay behind. For example, right now only half of the world has AI or digital strategies and has governmental spending or allocations for that; the other half doesn’t. So that digital divide is very dangerous, and there are numerous calls for how to minimize it. At the level of the United Nations, there are many types of streams for this, but I don’t think it’s enough, and I think a lot more has to be done.

Hopefully, through scientific breakthroughs in AI and some shared platforms and shared collaboration, that divide can be bridged and everyone can benefit. When I see the title of this AI Impact Summit, I cannot resonate with it more: welfare for all, happiness for all. AI should certainly benefit all, and not a selected few. I think that summits like this, and hosting a summit in the Global South, should give a renewed impetus for doing all of that. Thank you very much.

Abhay Karandikar

Thank you very much. Now, since we are running out of time, we just have time for two quick questions. We can take one from here. Yes, please, go ahead.

Audience

So my question is for Dr. Pineau and Dr. Sheth. I work at the intersection of AI and synthetic biology. Google DeepMind released AlphaFold in the public domain, and then they announced a newer AlphaFold model aimed at drug discovery, which they have chosen to keep private. So it’s very interesting that the fundamental model in fundamental science was released in the public domain, but the one with commercial applications in drug discovery, Google has chosen to keep private. My question is: do you see this as a trend, where scientific foundation models, as far as they relate to fundamental science, will be released as open source, but if they are fine-tuned for commercial applications, they will be kept private?

Do you see this as a trend, and what do we do about it, Professor Sheth, in India?

Joelle Pineau

Of course, I can’t speak to DeepMind’s strategy; that belongs to them. I’ve been in deep disagreement with their open-sourcing strategy for many years, respectfully so. I do think that the circulation of scientific assets and ideas is absolutely for the benefit of all. I will say it is possible to go against that trend. In 2023, I was responsible for a language model called Llama. At the time, the industry was against open-sourcing large language models; we went against that. We open-sourced the Llama 1 model, Llama 2, Llama 3. Today we’re looking at over 3 billion downloads of this family of models. It’s possible to see disturbances to those trends, and I think, specifically in the field of scientific research, there’s so much more to be gained by sharing assets and sharing ideas than by keeping them closed.

But that takes courage; it means going against the grain, and it takes vision.

Amit Sheth

I want to express deep admiration for that approach and the trend that you started in making models open source. India has to develop its own models. We just had a whole day yesterday with the pharma industry; they are our partners, and with the access to information and data they can provide, we will develop our own model for drug discovery. We are ourselves developing a very large pharma knowledge graph; we have already developed a decent one, and we will be training our own model with deep pharma and drug-related knowledge, our own version. Thank you.

Abhay Karandikar

So, just one last question to end with. Please be brief, I think 30 seconds, and then I will have one of the panelists answer in another 40 seconds.

Audience

my question is

Abhay Karandikar

yeah go ahead

Audience

My question is: are there any government guidelines for responsible global AI?

Abhay Karandikar

Does any of you want to answer this? Right.

Irakli Beridze

So there are numerous guidelines on the responsible use of AI in many different domains. From our side, from the angle of the UN where I am working, we developed guidelines, and not only guidelines but a practical framework, on the responsible use of AI in law enforcement, which is probably one of the most sensitive applications of artificial intelligence. That guideline, that toolkit, that practical framework has now been unveiled; it is working and has been tested in many countries. As I mentioned, India is one of the first countries implementing it, and that’s very admirable. Thank you.

Abhay Karandikar

Thank you very much. With this, I think our time is up and we have to close the session. I would like to thank all the panelists. Thank you all. I would just like to give away the mementos for the panel discussion. Thank you.

Related ResourcesKnowledge base sources related to the discussion topics (41)
Factual NotesClaims verified against the Diplo knowledge base (4)
Confirmedhigh

“Estelle David of Business France opened the AI Impact Summit, noting that roughly one hundred French companies were present across sectors such as quantum‑ready photonics, secure edge AI, mobility systems, cybersecurity, digital twins and green‑tech.”

The knowledge base states that Estelle David opened the summit by showcasing a French AI delegation of about 100 companies across sectors like quantum computing, cybersecurity and green tech, confirming the reported figure and sector breadth.

Confirmedmedium

“A partnership between H‑Company and St James Hospital in Bangalore was signed during the summit, and a collaboration between North France Invest and the TIAB was also announced.”

Source [S6] explicitly mentions the signature between H-Company and St James Hospital and the partnership between North France Invest and the TIAB, confirming these specific agreements.

Additional Contextmedium

“France now ranks among the world’s top three AI ecosystems (San Francisco, New York and Paris).”

While the ranking is not verified in the knowledge base, the source provides context that France hosts more than 1,100 AI startups and is actively doubling the number of AI scientists and engineers, underscoring its strong AI ecosystem.

Additional Contextlow

“India trains hundreds of thousands of AI engineers each year, giving it the second‑largest developer community in the world.”

Source [S118] reports that India produces about 500,000 AI engineers annually, confirming the scale of India’s AI talent pool referenced in the broader discussion of AI ecosystems.

External Sources (129)
S1
Scaling Trusted AI_ How France and India Are Building Industrial &amp; Innovation Bridges — The summit’s opening presentations by Estelle David from Business France (the trade and investment agency) and Julie Hug…
S2
Announcement of New Delhi Frontier AI Commitments — -David: Role/Title: Not specified (invited as distinguished leader of organization), Area of expertise: Not specified
S3
Meta’s AI research VP Joelle Pineau announces departure — Joelle Pineau, the Vice President of AI research at Meta, announced she will be leaving the company by the end of May, aft…
S4
Scaling Trusted AI_ How France and India Are Building Industrial & Innovation Bridges — – Amit Sheth- Joelle Pineau – Joelle Pineau- Audience
S5
Scaling Trusted AI_ How France and India Are Building Industrial &amp; Innovation Bridges — I’m still in my… So thank you, thank you to all of you. Now we are actually arriving to today’s session where we are g…
S6
https://dig.watch/event/india-ai-impact-summit-2026/scaling-trusted-ai_-how-france-and-india-are-building-industrial-innovation-bridges — I’m still in my… So thank you, thank you to all of you. Now we are actually arriving to today’s session where we are g…
S7
https://dig.watch/event/india-ai-impact-summit-2026/scaling-trusted-ai_-how-france-and-india-are-building-industrial-innovation-bridges — I’m still in my… So thank you, thank you to all of you. Now we are actually arriving to today’s session where we are g…
S8
Scaling Trusted AI_ How France and India Are Building Industrial & Innovation Bridges — -Tanuj Mittal- Senior Director Customer Solution Experience, Dassault Systèmes
S9
Scaling Trusted AI_ How France and India Are Building Industrial & Innovation Bridges — -Valerian Giesz- Co-Founder and CEO of Candela (quantum computing company)
S10
Scaling Trusted AI_ How France and India Are Building Industrial & Innovation Bridges — -Antoine Petit- CEO and Chairman, CNRS France (Centre National de la Recherche Scientifique)
S11
Scaling Trusted AI_ How France and India Are Building Industrial & Innovation Bridges — -Raj Reddy- Professor, founding director of the Robotics Institute at Carnegie Mellon University, 1994 Turing Award winn…
S12
Scaling Trusted AI_ How France and India Are Building Industrial & Innovation Bridges — -Julie Huguet- Director of LaFrenchTech Mission, supports growth of French startups in France and abroad
S14
Survival Tech Harnessing AI to Manage Global Climate Extremes — -Professor Seth- Referenced in transcript but appears to be referring to Amit Sheth
S15
Scaling Trusted AI_ How France and India Are Building Industrial & Innovation Bridges — -David Sadek- VP Research Technology and Innovation Global CTUI and Quantum Computing, Thales
S16
https://dig.watch/event/india-ai-impact-summit-2026/scaling-trusted-ai_-how-france-and-india-are-building-industrial-innovation-bridges — I’m still in my… So thank you, thank you to all of you. Now we are actually arriving to today’s session where we are g…
S17
The Future of AI in the Judiciary: Launch of the UNESCO Guidelines for the use of AI Systems in the Judiciary — Dr. Irakli Beridze: Yeah, very quick, 15 seconds. I have two basically comments. One is that it became obvious that, I me…
S18
AI for Good Impact Awards — LJ Rich: It’s a real pleasure to hear from somebody who is behind so much innovation for young people, and I think that …
S20
WS #280 the DNS Trust Horizon Safeguarding Digital Identity — – **Audience** – Individual from Senegal named Yuv (role/title not specified)
S21
Building the Workforce_ AI for Viksit Bharat 2047 — -Audience- Role/Title: Professor Charu from Indian Institute of Public Administration (one identified audience member), …
S22
Nri Collaborative Session Navigating Global Cyber Threats Via Local Practices — – **Audience** – Dr. Nazar (specific role/title not clearly mentioned)
S23
Scaling Trusted AI_ How France and India Are Building Industrial & Innovation Bridges — -Arun Sasheesh- Associate Partner and Country Director, TNP Consultants; Panel moderator -Saloni- Session coordinator/m…
S24
Day 0 Event #82 Inclusive multistakeholderism: tackling Internet shutdowns — – Nikki Muscati: Audience member who asked questions (role/affiliation not specified)
S25
Scaling Trusted AI_ How France and India Are Building Industrial & Innovation Bridges — – Antoine Petit- Joelle Pineau- Abhay Karandikar – Raj Reddy- Irakli Beridze- Abhay Karandikar
S26
AI-Powered Chips and Skills Shaping Indias Next-Gen Workforce — -Ashwini Vaishnaw- Role/Title: Honorable Minister (appears to be instrumental in India’s semiconductor industry developm…
S27
Keynote-Olivier Blum — -Moderator: Role/Title: Conference Moderator; Area of Expertise: Not mentioned -Mr. Schneider: Role/Title: Not mentione…
S28
Keynote-Vinod Khosla — -Moderator: Role/Title: Moderator of the event; Area of Expertise: Not mentioned -Mr. Jeet Adani: Role/Title: Not menti…
S29
Day 0 Event #250 Building Trust and Combatting Fraud in the Internet Ecosystem — – **Frode Sørensen** – Role/Title: Online moderator, colleague of Johannes Vallesverd, Area of Expertise: Online session…
S30
Scaling Trusted AI_ How France and India Are Building Industrial & Innovation Bridges — – Arun Sasheesh- Tanuj Mittal- Neelakantan Venkataraman – Neelakantan Venkataraman- David Sadek- Valerian Giesz
S31
Is Geopolitical ‘Coopetition’ Possible? — There are major signs of cooperation especially in the medical area and space
S32
Bridging the AI innovation gap — Partnership and Collaboration
S33
UNSC meeting: Artificial intelligence, peace and security — France:- The United Nations- France and international partnerships- Individual countries France:Madam President, I than…
S34
India and France to strengthen digital partnerships — Indian Prime Minister Narendra Modi’s two-day visit to France, where he held discussions with French President Emmanuel …
S35
Inclusive AI_ Why Linguistic Diversity Matters — The France-India partnership exemplified how countries with complementary strengths can collaborate to enhance rather th…
S36
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Ebba Busch Deputy Prime Minister Sweden — She positioned the partnership as combining complementary strengths: India provides scale and speed (the engine), while …
S37
AI That Empowers Safety Growth and Social Inclusion in Action — And these assessments provide a kind of clear -eyed look at how regional landscapes can evolve, inviting us to move beyo…
S38
Conversational AI in low income & resource settings | IGF 2023 — Addressing healthcare inequity requires collaboration and the appropriate use of technology. Inequities exist not only a…
S39
Paris competes for Europe’s AI leadership as major conference approaches — France is set to host tech executives and political figures this week, including former US Secretary of State John Kerry a…
S40
New report analyses GenAI startups in Europe and Israel — A report published by venture capital firm Accel shows the state of affairs of Europe and Israel’s generative AI (GenAI). …
S41
https://dig.watch/event/india-ai-impact-summit-2026/trusted-connections_-ethical-ai-in-telecom-6g-networks — And on the other side, you have a possibility of generating revenue by providing AI through the telecom network, which P…
S42
https://dig.watch/event/india-ai-impact-summit-2026/agents-of-change-ai-for-government-services-climate-resilience — Yeah, so one of the challenges in this is you can project it too much. It’s an exponential curve. It’s very hard to proj…
S43
Masterclass#1 — State limitations are underscored in the context of cyber threats. The norms, devised by the United Nations and other re…
S44
Defending Truth — What actions do stakeholders need to take to preserve a healthy trust ecosystem?
S45
Scaling AI for Billions_ Building Digital Public Infrastructure — “Because trust is starting to become measurable, right, through provenance, through authenticity, as well as verificatio…
S46
AI Policy Summit Opening Remarks: Discussion Report — The discussion identified several concrete commitments:
S48
Keynote Adresses at India AI Impact Summit 2026 — The discussion revealed significant financial commitments underpinning the partnership. Google announced substantial inv…
S49
AI Impact Summit 2026: Global Ministerial Discussions on Inclusive AI Development — “And the philosophy here is that AI is a tool which is helping the humankind to make a decision”[28]. “Trust is importan…
S50
AI Meets Agriculture Building Food Security and Climate Resilien — “AI must be transparent, auditable, and explainable”[96]. “Without trust, scale will not happen”[99]. “based on open sta…
S51
Multistakeholder Partnerships for Thriving AI Ecosystems — “And I would say it’s not an innovation gap, it’s a power gap.”[19]. “So all those things need framework and need govern…
S52
Open Forum #33 Building an International AI Cooperation Ecosystem — Participant: ≫ Distinguished guests, dear friends, it is a great honor to speak to you today on a topic that is reshapin…
S53
Welcome Address — “strong IT background, dynamic startup ecosystem, make India a natural hub for affordable, scalable, and secure AI solut…
S54
Free Science at Risk? / Davos 2025 — This panel discussion focused on the complex issue of research security and international collaboration in science. The …
S55
WS #462 Bridging the Compute Divide a Global Alliance for AI — The panel discussion revealed both the complexity of addressing global compute access challenges and the potential for m…
S56
How Small AI Solutions Are Creating Big Social Change — Low to moderate disagreement level. The speakers largely agreed on core principles (community-centered approach, partner…
S57
Global AI Policy Framework: International Cooperation and Historical Perspectives — The speakers demonstrate significant consensus on key principles including the need for inclusive governance, building o…
S58
Inclusive AI For A Better World, Through Cross-Cultural And Multi-Generational Dialogue — Larissa Zutter stands out as a senior AI policy advisor, closely studying the socio-economic implications of artificial …
S59
What policy levers can bridge the AI divide? — *Note: This summary is based on a transcript with significant audio quality issues, resulting in some unclear or fragmen…
S60
To share or not to share: the dilemma of open source vs. proprietary Large Language Models — Cost reduction in technology deployment Lama Impact Grants program Low cost due to the ability to build on existing mo…
S61
DeepSeek: Some trade-related aspects of the breakthrough  — Although so far proprietary models have predominated in the market, open source has been gaining traction, as noted by Y…
S62
US government seeks input on risks and benefits of Open AI models — The US Department of Commerce’s National Telecommunications and Information Administration (NTIA) is inviting comments on …
S63
Scaling Trusted AI_ How France and India Are Building Industrial & Innovation Bridges — Tension between open sourcing fundamental science models versus keeping commercially applicable models private
S64
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Panel Discussion — And questions of how we scale responsibly, how we engender trust in the technology, because in order for AI to be useful…
S65
Democratizing AI Building Trustworthy Systems for Everyone — I think thanks to the contributions from all of those experts. I truly think it is a testament to the industry that we a…
S66
The strategic imperative of open source AI — Meta’s Chief AI Scientist, Yann LeCun, captured this shift clearly. Responding to those who see DeepSeek’s rise as ‘Chin…
S67
Open Forum #27 Make Your AI Greener a Workshop on Sustainable AI Solutions — Adham Abouzied presented research showing that open source approaches significantly reduce both development costs and en…
S68
Why science metters in global AI governance — And also mentioned here. So this is where we are suggesting that this could be one way to look at. It’s not that everyth…
S69
AI Safety at the Global Level Insights from Digital Ministers Of — This identifies a critical gap in the science-to-policy pipeline – the need for translational work that converts scienti…
S70
WS #288 An AI Policy Research Roadmap for Evidence-Based AI Policy — ## Introduction and Context Setting Alex Moltzau: Yes, thank you so much. My name is Alex Maltzau. And I work as a seco…
S71
Laying the foundations for AI governance — ### Science-Based Policy as Common Ground Artemis Seaford: So the greatest obstacle, in my opinion, to translating AI g…
S72
Policy Network on Artificial Intelligence | IGF 2023 — Sarayu Natarajan advocates for a context-specific and rule of law approach in dealing with the issue of misinformation a…
S73
Artificial intelligence (AI) – UN Security Council — During the discussions, several key points emerged regarding the dual-edged nature of AI in this context. On one hand, A…
S74
WSIS Action Line C7 E-science: Assessment of progress made over the last 20 years — Such a strategy liberates editors from the financial pressures characteristic of commercial entities, allowing for a con…
S75
Science under siege from AI, integrity of research at risk — AI is rapidly transforming the landscape of scientific research, but not always for the better. A growing concern is the p…
S76
AI Policy Summit Opening Remarks: Discussion Report — The discussion identified several concrete commitments:
S77
Scaling Trusted AI_ How France and India Are Building Industrial & Innovation Bridges — The summit’s opening presentations by Estelle David from Business France (the trade and investment agency) and Julie Hug…
S79
Partnering on American AI Exports Powering the Future India AI Impact Summit 2026 — Industry representatives provided concrete examples of this collaboration in action. Sanjay Mehrotra from Micron describ…
S80
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Panel Discussion — And questions of how we scale responsibly, how we engender trust in the technology, because in order for AI to be useful…
S81
AI Impact Summit 2026: Global Ministerial Discussions on Inclusive AI Development — “And the philosophy here is that AI is a tool which is helping the humankind to make a decision”[28]. “Trust is importan…
S82
AI Meets Agriculture Building Food Security and Climate Resilien — But let me emphasize, AI is not a magic. As Honorable PM said in his inaugural session, AI must be built on trusted data…
S83
Multistakeholder Partnerships for Thriving AI Ecosystems — “And I would say it’s not an innovation gap, it’s a power gap.”[19]. “So all those things need framework and need govern…
S84
Welcome Address — “strong IT background, dynamic startup ecosystem, make India a natural hub for affordable, scalable, and secure AI solut…
S85
Scaling AI Beyond Pilots: A World Economic Forum Panel Discussion — All three industry leaders emphasized the need for collaborative, ecosystem-wide approaches rather than proprietary solu…
S86
Open Forum #33 Building an International AI Cooperation Ecosystem — **Sajid Rahman**, ICANN board member, emphasized that AI’s growth is “unprecedented compared to previous technological w…
S87
WS #462 Bridging the Compute Divide a Global Alliance for AI — The panel discussion revealed both the complexity of addressing global compute access challenges and the potential for m…
S88
What policy levers can bridge the AI divide? — Statement that audience will ‘really enjoy this next panel’ and emphasis on the distinguished nature of the guests
S89
Open Forum #30 High Level Review of AI Governance Including the Discussion — Juha Heikkila: Thank you Yoichi and thank you very much for this invitation. So I think it’s very useful to understand t…
S90
How Small AI Solutions Are Creating Big Social Change — Low to moderate disagreement level. The speakers largely agreed on core principles (community-centered approach, partner…
S91
Artificial intelligence (AI) – UN Security Council — The global focus on Artificial Intelligence (AI) capacity-building efforts has been a significant topic of discussion am…
S92
High-level AI Standards panel — Bilel Jamoussi: Thank you very much, Dr. Cho. Certainly, collaboration, inclusivity, and human-centered standards. Thank…
S93
Opening Remarks (50th IFDT) — The overall tone was formal yet warm and celebratory. Speakers expressed pride in the IFDT’s accomplishments and gratitu…
S94
Governments and Technical Community: A Successful Model of Multistakeholder Collaboration for Achieving the SDGs — The tone was consistently formal, diplomatic, and celebratory throughout the session. It maintained a positive, collabor…
S95
Partner2Connect High-Level Dialogue — The tone was consistently optimistic and collaborative throughout the discussion. It began with celebratory announcement…
S96
Opening of the session — Greece appreciates high-level discussions on cybersecurity, such as those initiated by the Republic of Korea. In an add…
S97
Opening Ceremony — The tone is consistently formal, diplomatic, and optimistic yet cautionary. Speakers maintain a celebratory atmosphere a…
S98
Secure Finance Risk-Based AI Policy for the Banking Sector — The discussion maintained a thoughtful, forward-looking tone throughout, characterized by cautious optimism about AI’s p…
S99
Critical Infrastructure in the Digital Age: From Deep Sea Cables to Orbital Satellites — The discussion maintained a balanced tone that was simultaneously informative and concerning. It began with an education…
S100
Panel 2 – Anticipating and Mitigating Risks Along the Global Subsea Network  — The discussion maintained a professional, collaborative tone throughout, with participants demonstrating technical exper…
S101
Strengthening Corporate Accountability on Inclusive, Trustworthy, and Rights-based Approach to Ethical Digital Transformation — The discussion maintained a professional, collaborative tone throughout, with speakers demonstrating expertise while ack…
S102
Global Perspectives on Openness and Trust in AI — These key comments fundamentally transformed what could have been a technical discussion about open-source AI into a sop…
S103
Science as a Growth Engine: Navigating the Funding and Translation Challenge — The discussion maintained a consistently thoughtful and collaborative tone throughout. While panelists acknowledged seri…
S104
Collaborative Innovation Ecosystem and Digital Transformation: Accelerating the Achievement of Global Sustainable Development Goals (SDGs) — The discussion maintained a consistently professional, collaborative, and optimistic tone throughout. Speakers demonstra…
S105
Centering People and Planet in the WSIS+20 and beyond — The session explored whether the WSIS vision remains relevant after 20 years and how to address persistent digital inequ…
S106
Trusted Connections_ Ethical AI in Telecom & 6G Networks — The discussion maintained a consistently optimistic and forward-looking tone throughout. Speakers expressed confidence i…
S107
Lift-off for Tech Interdependence? / DAVOS 2025 — The tone of the discussion was generally optimistic and excited about technological progress, while also acknowledging c…
S108
AI Development Beyond Scaling: Panel Discussion Report — The tone began as optimistic and technically focused, with researchers enthusiastically presenting their innovative appr…
S109
When Code and Creativity Collide: AI’s Transformation of Music and Creative Expression — The tone was thoughtful and forward-looking, with both speakers showing cautious optimism rather than fear. Harvey Mason…
S110
Safeguarding Children with Responsible AI — The discussion maintained a tone of “measured optimism” throughout. It began with urgency and concern (particularly in B…
S111
AI Governance Dialogue: Presidential address — The tone remained consistently optimistic and collaborative throughout both presentations. President Karis spoke with co…
S112
AI-Powered Chips and Skills Shaping Indias Next-Gen Workforce — The discussion maintained a consistently optimistic and collaborative tone throughout. Speakers expressed enthusiasm abo…
S113
DC-BAS: Blockchain Assurance for the Internet We Want and Can Trust — The overall tone was optimistic and forward-looking. Speakers were enthusiastic about the potential of these technologie…
S114
AI for Good Technology That Empowers People — The tone was consistently optimistic and collaborative throughout, with speakers demonstrating genuine enthusiasm for so…
S115
Discussion Report: AI Implementation and Global Accessibility — The tone was consistently optimistic and collaborative throughout the conversation. Both speakers maintained a construct…
S116
AI, Data Governance, and Innovation for Development — The tone of the discussion was largely optimistic and solution-oriented. Speakers acknowledged significant challenges bu…
S117
Keynote-HE Emmanuel Macron — Artificial intelligence Reference to previous address by Antonio Guterres; formal titles and protocol; mention of the A…
S118
https://dig.watch/event/india-ai-impact-summit-2026/keynote-he-emmanuel-macron — India trains hundreds of thousands of AI engineers every year. With 500,000 engineers, India has the second largest dev…
S119
Closure of the session — France is spearheading a significant international initiative to develop an action-oriented, state-driven mechanism to b…
S120
Final Report — ITU’s 25th Anniversary celebrations were graciously supported by the Kingdom of Saudi Arabia (Platinum spon…
S121
High-Level Track Facilitators Summary and Certificates — She emphasizes the importance of partnerships and acknowledges various stakeholders including UN partners, co-organizers…
S122
The WSIS welcome Part I: Meet the Movers Behind It — Tomas Lamanauskas: So thank you very, very much, Rob. So let’s give a round of applause of all our partners. And indeed y…
S123
Agenda item 6 — Gratitude extended to sponsors supporting women in cybersecurity. In closing, acknowledgment was given to international…
S124
Economic Diplomacy: India’s Experience — 2 This was affirmed by the ambassadors and high commissioners of France, Germany, Singapore and the UK at meetings held …
S125
https://dig.watch/event/india-ai-impact-summit-2026/founders-adda-raw-conversations-with-indias-top-ai-pioneers — So for example, anything and everything that is required we are basically making the entire suite of the… automation l…
S126
Space for Sustainable Development — In a high-level dialogue on “space for sustainable development,” with a particular focus on connectivity, six distinguis…
S127
Space Diplomacy: Exploring New Opportunities – ADF 2024 — The International Space Station (ISS) is a prime example of international cooperation in space. The forum on space and …
S129
Opening of the session — France: Thank you, Mr. Chair. My delegation aligns itself with the statement delivered by the European Union, and we w…
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
Estelle David
3 arguments, 118 words per minute, 742 words, 374 seconds
Argument 1
Partnership deals across AI, space, and healthcare illustrate deep cooperation (Estelle David)
EXPLANATION
Estelle highlighted a series of signed agreements made during the summit that span artificial intelligence, satellite propulsion, and hospital collaborations, demonstrating concrete outcomes of Franco‑Indian cooperation. These deals show that the partnership goes beyond rhetoric to real joint projects and investments.
EVIDENCE
She listed a strategic AI partnership between Dacia Technology and GT Solved, a major contract between ExoTrail and Druva Space for 14 satellite propulsion systems, a collaboration between H-Company and St. James Hospital, as well as joint initiatives between North France Invest and TIAB and the T-U-B partnership, all signed during the event [8-13].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The summit highlighted multiple signed agreements in AI, satellite propulsion and healthcare, confirming deep Franco-Indian cooperation [S1]; medical and space collaborations were specifically noted as major signs of partnership [S31]; and a broader digital partnership agenda was outlined in the India-France agreement [S34].
MAJOR DISCUSSION POINT
Franco‑Indian partnership deals
Argument 2
Broad participation of French AI companies across diverse sectors demonstrates the depth and breadth of France’s AI ecosystem.
EXPLANATION
Estelle notes that around one hundred French companies attended the summit, covering areas such as quantum‑ready photonics, secure edge AI, mobility systems, cybersecurity, digital twins and green technologies, illustrating the wide‑ranging expertise within France.
EVIDENCE
She states that “Altogether, it was about 100 French companies … you can find in different sectors like quantum-ready photonics, secure edge AI, mobility systems, cybersecurity, digital twin, and green tech” [4-5].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Estelle’s opening remarks described a delegation of about 100 French firms spanning quantum-ready photonics, secure edge AI, mobility, cybersecurity, digital twins and green tech, illustrating sectoral breadth [S1].
MAJOR DISCUSSION POINT
Diverse French AI sector representation
Argument 3
Collaboration with Business France and partner networks is crucial for mobilising French AI champions in India.
EXPLANATION
Estelle credits the collective network of Business France, LaFrenchTech, Numium and other partners for enabling the successful presence of French startups at the summit, highlighting the importance of coordinated institutional support.
EVIDENCE
She acknowledges “the strength of our collective network and Business France … we have collaborated very closely with different partners with definitely LaFrenchTech and … Numium … the co-organiser of this event, the Franco-Thai Chamber of Commerce, Indo-French Chamber of Commerce, IFKI” [14].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The event’s description emphasizes the role of Business France together with LaFrenchTech, Numium and chambers of commerce in enabling French startups’ presence [S1].
MAJOR DISCUSSION POINT
Role of institutional networks in AI partnership
Julie Huguet
4 arguments, 128 words per minute, 624 words, 291 seconds
Argument 1
Complementary strengths—French deep‑tech excellence and Indian scale—fuel shared‑value partnerships (Julie Huguet)
EXPLANATION
Julie argued that France’s deep‑tech capabilities combined with India’s massive market and engineering capacity create a powerful synergy for AI collaboration. This complementarity enables joint innovation, investment attraction, and the scaling of French startups in India.
EVIDENCE
She cited France’s ranking among the top three global AI ecosystems, the presence of leading French AI firms such as Mistral AI and H-Company, the French President’s announcement of a hospital-AI partnership, and India’s scale of 1.4 billion people and 200,000 startups, emphasizing the powerful complementarity of French expertise and Indian scale [39-65].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Analyses of the France-India AI partnership stress the complementary nature of French deep-tech and India’s massive market and engineering capacity [S35]; a similar view is expressed in a keynote on trusted AI that highlights India’s scale as the “engine” and France’s precision as the “filter” [S36].
MAJOR DISCUSSION POINT
French‑Indian complementary strengths
Argument 2
AI drives innovation in healthcare, agriculture, climate, grounded in shared values (Julie Huguet)
EXPLANATION
Julie emphasized that AI is being applied to critical sectors such as health, agriculture and climate, reflecting shared values of trust, low environmental footprint and positive societal impact. She presented concrete examples of French‑Indian collaborations that aim to improve lives and the planet.
EVIDENCE
She mentioned the French President’s announcement of a partnership between H-Company and St. James Hospital to make hospitals more efficient, and described AI-driven initiatives in healthcare, agriculture and climate that embody common values of trust and sustainability [46-52].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Case studies on AI-enabled healthcare in low-resource settings illustrate how AI is used to improve health outcomes while respecting equity and sustainability values [S38].
MAJOR DISCUSSION POINT
AI for societal impact
Argument 3
France has risen to become one of the world’s top three AI ecosystems, underscoring its growing global influence.
EXPLANATION
Julie cites a ranking that places Paris alongside San Francisco and New York as a leading AI hub, signalling France’s emergence as a major player in AI research and industry.
EVIDENCE
She reports that “according to Deal Room, the top three AI ecosystems globally are now San Francisco, New York, and Paris” [39-40].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Reports on Paris’s AI leadership place it alongside San Francisco and New York as a top global AI hub [S39]; a separate analysis of European generative-AI startups also highlights France’s prominent position [S40].
MAJOR DISCUSSION POINT
France’s global AI standing
Argument 4
Key French AI leaders such as Mistral AI and H‑Company exemplify the country’s deep‑tech strength and ambition.
EXPLANATION
Julie mentions prominent French AI firms to illustrate the nation’s capacity for cutting‑edge AI development and its ambition to lead in the field.
EVIDENCE
She notes “We already have major European leaders such as Mistral AI or H-Company” [42].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Coverage of France’s AI ecosystem frequently cites Mistral AI and H-Company as flagship deep-tech firms driving innovation [S39].
MAJOR DISCUSSION POINT
Prominent French AI companies
Neelakantan Venkataraman
2 arguments, 156 words per minute, 1114 words, 428 seconds
Argument 1
Trust must be baked into every layer of the stack and meet regulatory standards (Neelakantan Venkataraman)
EXPLANATION
Neelakantan explained that trust cannot be an afterthought; it must be embedded at each architectural layer of AI systems and comply with regulations such as India’s DPDP and the EU AI Act. This foundational trust is essential for moving from pilots to production at scale.
EVIDENCE
He described trust as “I have your back and I will not fail you”, insisting it be built into the stack, data lineage, explainability, auditability and zero-trust networking, and noted that regulatory guidance has shifted from soft guidance to concrete policies like DPDP and the EU AI Act [130-143].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Recent discussions on trustworthy AI stress that trust is now measurable through provenance, authenticity and verification, and must be embedded across the stack to satisfy regulations such as DPDP and the EU AI Act [S45]; broader trust-ecosystem frameworks also underline this requirement [S44].
MAJOR DISCUSSION POINT
Embedded trust in AI architecture
DISAGREED WITH
Arun Sasheesh, David Sadek, Tanuj Mittal
Argument 2
An ecosystem partnership model is needed to preserve trust across sectors (Neelakantan Venkataraman)
EXPLANATION
He argued that no single organization can ensure trust alone; a collaborative ecosystem of partners is required to maintain a consistent trust architecture across different domains. This ecosystem approach leverages joint security and compliance components.
EVIDENCE
He stated that “we can’t do it all” and highlighted partnerships such as with Thales for security components, emphasizing the need for an ecosystem to keep trust intact [253-257].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
A masterclass on AI governance advocates a collaborative ecosystem model for effective enforcement of norms and standards [S43]; trust-ecosystem literature further stresses the need for multi-partner arrangements to maintain consistent trust guarantees [S44].
MAJOR DISCUSSION POINT
Ecosystem mindset for trust
Valerian Giesz
2 arguments, 132 words per minute, 541 words, 244 seconds
Argument 1
Quantum‑AI trust rests on traceability, predictability, verifiability, security, and accountability (Valerian Giesz)
EXPLANATION
Valerian outlined five pillars that define trust for quantum‑AI systems: the ability to trace data and models, predict system limits, verify performance, ensure security, and maintain clear accountability across the value chain. These pillars are necessary to move quantum technologies from the lab to real‑world deployments.
EVIDENCE
He listed “trustability”, traceability, predictability, verifiability, security and accountability as essential, and described Quandela’s MERLIN benchmarking framework that provides reproducible runs and performance validation [162-179].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Trust frameworks for emerging AI technologies identify traceability, predictability, verifiability, security and accountability as core pillars for moving quantum-AI from lab to production [S44].
MAJOR DISCUSSION POINT
Quantum‑AI trust pillars
Argument 2
Breaking walls between quantum and AI and sharing benchmarks creates a trustworthy community (Valerian Giesz)
EXPLANATION
Valerian advocated for dismantling silos between quantum computing and AI, proposing shared benchmarking tools to foster a common baseline and community trust. By releasing the MERLIN framework, Quandela aims to establish reproducible standards that both French and Indian researchers can use.
EVIDENCE
He explained the release of MERLIN to benchmark quantum machine-learning applications, its use for reproducibility, and the goal of building a shared community between France and India [259-267].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The same trust-ecosystem literature calls for dismantling silos between quantum computing and AI, and for shared benchmarking tools to foster a common baseline and community trust [S44].
MAJOR DISCUSSION POINT
Collaboration between quantum and AI
David Sadek
3 arguments · 128 words per minute · 555 words · 258 seconds
Argument 1
Trust is demonstrated through friendly hacking, explainability, and ethical responsibility (David Sadek)
EXPLANATION
David described how Thales validates AI systems by actively attacking them (friendly hacking), ensuring they can explain their decisions, and embedding ethical and regulatory responsibilities. These practices turn trust from a promise into provable evidence.
EVIDENCE
He recounted a “friendly hacking” team that identified vulnerabilities, gave an example of a digital copilot needing to explain a maneuver, and highlighted responsibility through compliance with the EU AI Act, carbon-footprint reduction and AI-for-green initiatives, concluding that trust must be proved [188-197].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Best-practice guides for trustworthy AI highlight proactive “friendly hacking”, explainability and compliance with ethical and regulatory mandates as concrete ways to prove trustworthiness [S45].
MAJOR DISCUSSION POINT
Operational trust mechanisms
DISAGREED WITH
Arun Sasheesh, Neelakantan Venkataraman, Tanuj Mittal
Argument 2
Combining French depth with Indian speed provides the foundation for trusted AI (David Sadek)
EXPLANATION
David noted that France has spent decades building highly reliable, certified AI systems for critical sectors, while India has rapidly deployed digital infrastructure at massive scale. The synergy of French depth and Indian speed can overcome trust challenges for large‑scale AI adoption.
EVIDENCE
He contrasted France’s long-term certification and proof-based trust culture with India’s fast-moving digital infrastructure, arguing that their combination is essential for scaling AI responsibly [272-275].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Analyses of the France-India partnership note that France contributes deep, certified AI expertise while India offers rapid, large-scale deployment capabilities, a synergy likened to “precision” plus “scale” [S36]; complementary-strength discussions also underline this blend of depth and speed [S35].
MAJOR DISCUSSION POINT
France‑India complementary capabilities
Argument 3
Responsibility includes ethics, carbon‑footprint reduction, and proof‑based trust (David Sadek)
EXPLANATION
David emphasized that responsible AI must address ethical principles, minimize energy consumption, and provide demonstrable proof of trustworthiness. Initiatives such as frugal AI and AI‑for‑green illustrate how environmental stewardship is part of responsible AI.
EVIDENCE
He described efforts to reduce data volume for training, develop frugal AI, and apply AI to lower aircraft emissions, linking these actions to the broader responsibility agenda [194-198].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Responsible-AI reports stress the importance of ethical design, frugal AI and carbon-footprint reduction as integral to proof-based trust frameworks [S37]; trust measurement literature also links these dimensions to demonstrable trust metrics [S45].
MAJOR DISCUSSION POINT
Ethical and environmental responsibility
Sandeep Kumar Saxena
3 arguments · 142 words per minute · 687 words · 289 seconds
Argument 1
Leadership‑driven AI adoption and iterative learning build organisational trust (Sandeep Kumar Saxena)
EXPLANATION
Sandeep argued that AI adoption must start at the top, with leaders modelling AI‑enabled decision‑making, and that trust is built gradually through iterative learning and certification of staff. This top‑down approach creates a culture where AI is trusted and widely used.
EVIDENCE
He explained that his AI-driven sales and forecasting tools are used by himself and his teams, that every employee is AI-certified, and that trust grows over time through patient, continuous learning [215-218].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Trust-building studies emphasize top-down leadership, iterative learning and staff certification as key levers for cultivating organisational confidence in AI systems [S45].
MAJOR DISCUSSION POINT
Leadership and iterative trust building
Argument 2
Openness and adaptability are essential for embracing AI change (Sandeep Kumar Saxena)
EXPLANATION
He stressed that organisations need to be open‑minded and adaptable, learning from both French and Indian practices, to successfully integrate AI. Flexibility and willingness to change are key to staying competitive.
EVIDENCE
He noted the contrast between French scheduling and Indian flexibility, urging openness and adaptability, and later summed up with the phrase “just be open-minded and learn to adopt change” [65-68][277-279].
MAJOR DISCUSSION POINT
Adaptability for AI adoption
Argument 3
AI solutions for citizens—fraud detection, compliance, training, skilling—enhance public welfare (Sandeep Kumar Saxena)
EXPLANATION
Sandeep presented a portfolio of AI‑powered solutions aimed at everyday citizens, including fraud detection, compliance monitoring, and skill‑building tools, illustrating how AI can directly improve public services and safety.
EVIDENCE
He listed specific AI products such as fraud detection systems, compliance monitoring, training and skilling platforms that are being showcased at the summit for citizen-level impact [221-224].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Examples of AI-enabled public-service tools for fraud detection, compliance monitoring and skill-building illustrate how AI can directly improve citizen welfare, as discussed in healthcare-AI equity case studies [S38].
MAJOR DISCUSSION POINT
AI for public welfare
Arun Sasheesh
1 argument · 124 words per minute · 652 words · 312 seconds
Argument 1
Trust is the only way to achieve large‑scale AI adoption (Arun Sasheesh)
EXPLANATION
Arun asserted that without trust from corporations, banks and governments, AI cannot be deployed at the scale needed for societal transformation. Trust therefore becomes the prerequisite for any large‑scale AI rollout.
EVIDENCE
He linked the Indian public’s trust in UPI to its scaling, repeated that “trust is the only way to scale”, and emphasized that large organisations will adopt AI only when they trust it [86-94].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Trust-measurement frameworks argue that large-scale AI rollout depends on quantifiable trust signals such as provenance and verification, echoing the claim that trust is a prerequisite for scaling [S45]; ecosystem-trust literature reinforces this point [S44].
MAJOR DISCUSSION POINT
Trust as prerequisite for scale
DISAGREED WITH
Neelakantan Venkataraman, David Sadek, Tanuj Mittal
Tanuj Mittal
2 arguments · 134 words per minute · 745 words · 332 seconds
Argument 1
Trust evolves to data lineage, human‑in‑the‑loop oversight, simulation, and end‑to‑end validation (Tanuj Mittal)
EXPLANATION
Tanuj described how the notion of trust has progressed from simple accuracy to comprehensive data provenance, continuous human oversight, realistic simulation of AI outputs, and full lifecycle validation. These layers are now required for industrial AI acceptance.
EVIDENCE
He explained the shift from accuracy-only models to requirements for ethical data lineage, people-in-the-loop governance, virtual twin simulations, built-in compliance checks, and end-to-end validation before deployment [227-245].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Emerging AI governance models describe a progression from simple accuracy to full data lineage, human-in-the-loop governance, virtual-twin simulation and end-to-end validation as essential trust components [S45]; these align with the identified trust pillars [S44].
MAJOR DISCUSSION POINT
Evolution of trust in industrial AI
DISAGREED WITH
Arun Sasheesh, Neelakantan Venkataraman, David Sadek
Argument 2
Trust drives massive user adoption, as shown by UPI’s nationwide uptake (Tanuj Mittal)
EXPLANATION
He used India’s Unified Payments Interface (UPI) as a case study, showing that widespread public trust enabled billions of transactions and adoption even among digitally illiterate users, illustrating the link between trust and scale.
EVIDENCE
He cited UPI’s 21 billion transactions worth ₹30 lakh crore in a year and its use by even the most digitally illiterate citizens, arguing that trust directly fuels scale [281-283].
MAJOR DISCUSSION POINT
Trust leading to scale (UPI example)
Abhay Karandikar
1 argument · 123 words per minute · 858 words · 418 seconds
Argument 1
AI can compress decades of research, but equitable access and inclusion are critical (Abhay Karandikar)
EXPLANATION
Abhay highlighted AI’s potential to accelerate scientific discovery dramatically, yet warned that benefits must be shared globally to avoid widening the digital divide. Inclusive access to AI tools and data is essential for equitable progress.
EVIDENCE
He noted that AI can turn decades of research into months, but emphasized that many regions still face barriers to AI adoption, stressing the need for equitable distribution and inclusion [369-372].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Inclusive-AI analyses stress that while AI can accelerate scientific discovery, ensuring equitable access and preventing digital divides are essential for responsible deployment [S35].
MAJOR DISCUSSION POINT
AI acceleration vs. equitable access
Amit Sheth
2 arguments · 130 words per minute · 1046 words · 480 seconds
Argument 1
IRO builds high‑end talent and compact neurosymbolic models for domain‑specific breakthroughs (Amit Sheth)
EXPLANATION
Amit described the Indian Research Organization’s (IRO) strategy of cultivating top‑tier talent and developing compact, neurosymbolic AI models tailored to specific sectors such as healthcare, sustainability and pharma. This approach aims to produce high‑impact, domain‑focused breakthroughs.
EVIDENCE
He recounted IRO’s talent pipeline, collaborations with universities and industry, and the creation of compact neurosymbolic models for pharma, citing examples such as the FDA-approved arthritis drug identified by BenevolentAI via knowledge graphs [386-440][561-573].
MAJOR DISCUSSION POINT
Talent and neurosymbolic AI for breakthroughs
Argument 2
IRO develops open knowledge graphs and custom models for pharma, emphasizing openness (Amit Sheth)
EXPLANATION
Amit emphasized IRO’s commitment to open science by building a large pharma knowledge graph and training proprietary models that remain open, fostering transparency and collaboration in drug discovery.
EVIDENCE
He stated that IRO is creating its own pharma knowledge graph and will train a custom model for drug discovery, underscoring the open-source ethos [630-633].
MAJOR DISCUSSION POINT
Open knowledge graphs for pharma
Antoine Petit
2 arguments · 135 words per minute · 1028 words · 456 seconds
Argument 1
CNRS’s AI‑for‑Science virtual centre promotes interdisciplinary cooperation and warns of AI‑generated false papers (Antoine Petit)
EXPLANATION
Antoine explained that CNRS has launched a virtual AI‑for‑Science centre to foster collaboration across disciplines, but cautioned that AI‑generated papers risk polluting scientific literature if not properly vetted.
EVIDENCE
He described the virtual centre’s role in linking AI producers and consumers, the need for interdisciplinary loops, and warned that AI can produce false papers that waste researchers’ time [444-484].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
UN discussions on AI and security highlight the risk of AI-generated false scientific papers and the need for interdisciplinary safeguards, mirroring the concerns raised about the CNRS virtual centre [S33].
MAJOR DISCUSSION POINT
AI‑for‑Science virtual centre & false paper risk
DISAGREED WITH
Other panelists (implicit)
Argument 2
AI‑generated false scientific papers pose risks, demanding ethical safeguards (Antoine Petit)
EXPLANATION
He reiterated the danger that AI‑generated manuscripts could undermine scientific integrity, calling for ethical safeguards and rigorous validation to prevent misinformation in academia.
EVIDENCE
He highlighted the specific risk that AI can generate large numbers of papers that may be incorrect, leading to wasted effort and potential misinformation [479-482].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The same UN-AI briefing underscores the potential for AI-generated misinformation in academia and calls for ethical safeguards and rigorous validation [S33].
MAJOR DISCUSSION POINT
Risk of AI‑generated false papers
Joelle Pineau
2 arguments · 171 words per minute · 836 words · 291 seconds
Argument 1
Reproducibility requires transparent artifact sharing and standardized evaluation criteria (Joelle Pineau)
EXPLANATION
Joelle argued that to ensure AI‑generated scientific results are trustworthy, researchers must make code, data and models publicly available and agree on clear evaluation metrics. Transparency and standardized benchmarks are essential for reproducibility.
EVIDENCE
She discussed her work on reproducibility challenges, emphasizing the need for publicly available artifacts and well-defined evaluation criteria to enable reliable replication of AI research [548-558].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Trust-ecosystem literature stresses that reproducibility hinges on open sharing of code, data and models together with clear evaluation metrics [S44].
MAJOR DISCUSSION POINT
Transparency and evaluation for reproducibility
DISAGREED WITH
Antoine Petit
Argument 2
Open‑sourcing large models accelerates progress despite industry resistance (Joelle Pineau)
EXPLANATION
Joelle highlighted that releasing large language models to the public, as she did with the Llama series, can dramatically increase adoption and scientific progress, even though many industry players oppose open‑sourcing.
EVIDENCE
She recounted the open-source release of Llama 1–3, noting over three billion downloads and arguing that openness benefits scientific research despite resistance from commercial entities [618-628].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Analyses of European generative-AI startups note that open releases of foundational models drive rapid adoption and scientific progress, even as commercial entities resist openness [S40].
MAJOR DISCUSSION POINT
Open‑source large models
DISAGREED WITH
Audience
Audience
1 argument · 166 words per minute · 158 words · 56 seconds
Argument 1
Trend: open scientific foundation models versus closed commercial fine‑tuned models (Audience)
EXPLANATION
An audience member observed a growing pattern where foundational AI models are released openly for research, while versions fine‑tuned for commercial applications remain proprietary, raising concerns about accessibility and equity.
EVIDENCE
The question referenced the release of foundational models such as Google DeepMind’s AlphaFold into the public domain, contrasted with commercial fine-tuned versions kept private, and asked whether this trend will continue and what its implications are [608-617].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Recent reports on GenAI startups observe a growing pattern where foundational models are released openly for research while fine-tuned commercial versions remain proprietary [S40].
MAJOR DISCUSSION POINT
Open vs. proprietary model trend
DISAGREED WITH
Joelle Pineau
Irakli Beridze
1 argument · 162 words per minute · 1140 words · 421 seconds
Argument 1
UN‑backed toolkit offers responsible AI guidelines for law enforcement and tackles the digital divide (Irakli Beridze)
EXPLANATION
Irakli described the United Nations’ development of a practical framework and guidelines for the responsible use of AI in law enforcement, which has already been piloted in several countries including India, aiming to bridge the digital divide and ensure ethical AI deployment.
EVIDENCE
He explained UNICRI’s mandate, the creation of toolkits for responsible AI, implementation in five countries (India, Kazakhstan, Nigeria, Oman, Brazil), and recent policy recommendations to improve public trust in AI-enabled law enforcement [507-541][637-642].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
UN-led initiatives on responsible AI for law enforcement have produced practical toolkits piloted in several countries, aiming to bridge the digital divide and ensure ethical deployment [S33].
MAJOR DISCUSSION POINT
UN responsible AI toolkit for law enforcement
Raj Reddy
1 argument · 113 words per minute · 950 words · 502 seconds
Argument 1
Multilingual AGI, personal sovereign edge models, and humane weapons aim to benefit the bottom of the pyramid (Raj Reddy)
EXPLANATION
Raj called for measurable progress toward multilingual AI assistants that work in local languages, personal edge AI models that preserve privacy, and the development of humane AI‑enabled weapons that protect civilians, all targeted at improving lives of the most vulnerable.
EVIDENCE
He cited startups working on multilingual interfaces, the need for a quantitative matrix to assess progress, the vision of personal sovereign edge models that keep data private, and the concept of humane weapons that disable rather than destroy targets, emphasizing benefits for the poorest [296-304][309-324][342-346].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Inclusive-AI research highlights the importance of multilingual AI assistants for low-resource languages and edge-centric models that preserve privacy, aligning with the vision of bottom-of-the-pyramid impact [S35]; discussions on AI-enabled edge solutions emphasize the “engine-filter” analogy of Indian scale and French precision [S36].
MAJOR DISCUSSION POINT
Inclusive, ethical AI for the underserved
Moderator
3 arguments · 39 words per minute · 525 words · 805 seconds
Argument 1
LaFrenchTech is a leading European innovation ecosystem that represents thousands of deep‑tech companies and scale‑ups, making it pivotal for Europe’s technological leadership.
EXPLANATION
The moderator highlights Julie Rouget’s role as director of LaFrenchTech and emphasizes that the organisation brings together a vast number of deep‑tech firms, positioning Europe at the forefront of technology development.
EVIDENCE
During the opening, the moderator introduces Julie Rouget as director of the French Tech mission and notes that she leads “one of the world’s most dynamic innovation ecosystems … representing thousands of deep-tech companies and scale-ups shaping Europe’s technological leadership” [31].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Coverage of Paris’s AI leadership notes that LaFrenchTech aggregates thousands of deep-tech firms and scale-ups, positioning Europe at the forefront of technological development [S39]; broader European AI ecosystem analyses also underline France’s central role [S40].
MAJOR DISCUSSION POINT
Importance of LaFrenchTech ecosystem
Argument 2
A high‑level, cross‑sector panel is essential for France and India to jointly accelerate trusted AI across multiple domains.
EXPLANATION
The moderator frames the upcoming panel as a platform where leaders from telecom, quantum, industrial AI, cloud infrastructure and digital transformation will discuss how the two countries can work together to build trust in AI systems.
EVIDENCE
The moderator announces the panel’s purpose: “to reflect on how our two countries can jointly accelerate trusted AI across sectors” and lists the sectors that will be covered, such as telecom, quantum and industrial AI [74-75].
MAJOR DISCUSSION POINT
Joint acceleration of trusted AI
Argument 3
The AI for Science panel brings together a diverse set of international experts to explore AI’s role in accelerating scientific discovery and fostering global cooperation.
EXPLANATION
By introducing the AI for Science session and its distinguished panelists, the moderator underscores the significance of AI as a tool for scientific research and the need for collaborative, cross‑national efforts.
EVIDENCE
The moderator announces the next session, describing it as “a panel discussion on AI for science” and lists the expert panelists, emphasizing the importance of AI in scientific advancement and international collaboration [351-358].
MAJOR DISCUSSION POINT
AI for scientific acceleration and cooperation
Agreements
Agreement Points
Similar Viewpoints
Unexpected Consensus
Differences
Different Viewpoints
Open‑source foundational models versus proprietary fine‑tuned commercial models
Speakers: Audience, Joelle Pineau
Trend: open scientific foundation models versus closed commercial fine‑tuned models (Audience)
Open‑sourcing large models accelerates progress despite industry resistance (Joelle Pineau)
An audience member warned that while foundational AI models are increasingly released openly, the versions that are fine-tuned for commercial use remain proprietary, raising concerns about accessibility and equity [608-617]. Joelle responded that open-sourcing large language models (e.g., the Llama series) dramatically increases adoption and scientific progress, even though many industry players oppose it, arguing that openness benefits the whole community [618-628]. The two positions clash over whether openness should be the default for both foundational and commercial AI assets.
POLICY CONTEXT (KNOWLEDGE BASE)
Policy discussions highlight the trade-off between openness, cost reduction, and transparency versus commercial control; governments (e.g., NTIA) are soliciting input on risks and benefits of open foundation models, while industry leaders argue open-source models are surpassing proprietary ones [S60][S62][S63][S66].
How to achieve trustworthy AI at scale
Speakers: Arun Sasheesh, Neelakantan Venkataraman, David Sadek, Tanuj Mittal
Trust is the only way to achieve large‑scale AI adoption (Arun Sasheesh)
Trust must be baked into every layer of the stack and meet regulatory standards (Neelakantan Venkataraman)
Trust is demonstrated through friendly hacking, explainability, and ethical responsibility (David Sadek)
Trust evolves to data lineage, human‑in‑the‑loop oversight, simulation, and end‑to‑end validation (Tanuj Mittal)
Arun argued that without trust, AI cannot scale, positioning trust as the prerequisite for any large-scale rollout and citing the UPI example as proof of trust-driven scaling [86-94]. Neelakantan emphasized that trust must be embedded architecturally across all layers and aligned with regulations such as DPDP and the EU AI Act, treating it as a technical and compliance requirement [130-143]. David described operational trust as proof-based, using friendly-hacking exercises, explainability, and responsibility (including carbon-footprint reduction) to demonstrate trustworthiness [188-197]. Tanuj traced the evolution of trust from simple accuracy to comprehensive data lineage, human oversight, virtual-twin simulation, and full lifecycle validation before deployment [227-245]. While all agree trust is essential, they disagree on the primary mechanism: cultural prerequisite, architectural embedding, proof-based testing, or lifecycle governance.
POLICY CONTEXT (KNOWLEDGE BASE)
Building trusted AI at scale is framed as a cornerstone for responsible deployment, with panels emphasizing digital sovereignty, scaling responsibly, and multi-stakeholder collaboration to engender trust in AI systems [S64][S65][S63].
Managing AI‑generated scientific outputs: risk of false papers versus reproducibility solutions
Speakers: Antoine Petit, Joelle Pineau
CNRS’s AI‑for‑Science virtual centre promotes interdisciplinary cooperation and warns of AI‑generated false papers (Antoine Petit)
Reproducibility requires transparent artifact sharing and standardized evaluation criteria (Joelle Pineau)
Antoine highlighted that AI can produce large numbers of scientific papers, many of which may be incorrect, risking wasted effort and misinformation in the literature [479-482]. Joelle countered that reproducibility challenges (built on open sharing of code and data, together with clear evaluation metrics) can mitigate such risks and actually accelerate trustworthy scientific discovery [548-558]. The disagreement lies in whether the primary concern is the prevalence of false outputs or the establishment of transparent, standardized processes to ensure reliability.
POLICY CONTEXT (KNOWLEDGE BASE)
Reports warn that AI-generated errors threaten research integrity, prompting calls for reproducibility frameworks and e-science strategies to safeguard scientific outputs [S75][S74][S68][S73].
Whether a dedicated AI‑for‑Science platform is needed
Speakers: Antoine Petit, Other panelists (implicit)
CNRS’s AI‑for‑Science virtual centre promotes interdisciplinary cooperation and warns of AI‑generated false papers (Antoine Petit)
Discussion on building a platform was left open, with no consensus reached (implicit from panel flow)
When asked if a dedicated AI-for-Science platform is required, Antoine expressed uncertainty, noting that while cooperation is essential, he was not convinced a single platform is the answer [471-473]. Other speakers (e.g., David, Joelle) discussed tools and frameworks but did not commit to a unified platform, indicating a lack of agreement on the structural solution for AI-driven scientific research.
POLICY CONTEXT (KNOWLEDGE BASE)
The WSIS Action Line on e-science advocates dedicated infrastructure to support reproducibility and reduce commercial pressures, suggesting a policy rationale for a specialized AI-for-Science platform [S74][S68].
Unexpected Differences
Open‑source versus proprietary AI models in the context of scientific research
Speakers: Audience, Joelle Pineau
Trend: open scientific foundation models versus closed commercial fine‑tuned models (Audience)
Open‑sourcing large models accelerates progress despite industry resistance (Joelle Pineau)
The audience’s concern that commercial fine‑tuned models will remain closed, limiting equitable access, was not directly addressed by other panelists and contrasts with Joelle’s strong advocacy for open‑sourcing large models. This divergence was unexpected because most participants focused on trust, governance, or collaboration rather than the openness of model releases.
POLICY CONTEXT (KNOWLEDGE BASE)
The open-source vs. proprietary debate extends to scientific research, where openness is linked to reproducibility and transparency, while proprietary models raise concerns about control and dual-use risks [S60][S63][S66][S74].
Severity of AI‑generated false scientific papers
Speakers: Antoine Petit, Joelle Pineau
CNRS’s AI‑for‑Science virtual centre warns of AI‑generated false papers (Antoine Petit)
Reproducibility requires transparent artifact sharing and standardized evaluation criteria (Joelle Pineau)
Antoine emphasizes the risk that AI‑generated papers could flood the literature with incorrect results, a problem he frames as a major threat. Joelle, while acknowledging reproducibility challenges, focuses on solutions (transparency, benchmarks) and does not treat the risk as a crisis. The difference in perceived severity and priority of the issue was not anticipated given the overall collaborative tone of the summit.
POLICY CONTEXT (KNOWLEDGE BASE)
Analyses of AI safety and research integrity identify AI-generated fake papers as a high-severity threat, calling for governance measures to mitigate misinformation in the scholarly record [S75][S73][S72].
Overall Assessment

The panelists largely converged on the importance of trust, collaboration, and the complementary strengths of France and India for AI advancement. Disagreements centered on the mechanisms to achieve trustworthy AI (cultural prerequisite vs. architectural embedding vs. proof‑based testing), the openness of AI models (open‑source versus proprietary), and the handling of AI‑generated scientific outputs (risk of false papers versus reproducibility frameworks). These divergences are substantive but not antagonistic, reflecting different professional lenses (policy, engineering, research) rather than fundamental conflict.

The overall level of disagreement is moderate: while there is clear consensus on high‑level goals (trusted AI, France‑India partnership, societal impact), the speakers differ on implementation pathways and policy nuances. The implication is that coordinated action will require reconciling these approaches – e.g., integrating regulatory compliance, technical safeguards, open‑source incentives, and reproducibility standards – to build a unified, trustworthy AI ecosystem across both nations.

Partial Agreements
All these speakers share the overarching goal of building trustworthy AI that can be deployed at national or global scale and of strengthening France‑India collaboration. However, they diverge on the pathways: Arun treats trust as the prerequisite for scaling; Neelakantan stresses architectural embedding and regulatory compliance; David focuses on proof‑based testing and ethical responsibility; Tanuj highlights data lineage and simulation; Julie and Estelle emphasize complementary national strengths and concrete partnership deals as the engine for trust‑enabled scaling. The consensus on the goal coexists with differing strategic emphases.
Speakers: Arun Sasheesh, Neelakantan Venkataraman, David Sadek, Tanuj Mittal, Julie Huguet, Estelle David
Trust is the only way to achieve large‑scale AI adoption (Arun Sasheesh)
Trust must be baked into every layer of the stack and meet regulatory standards (Neelakantan Venkataraman)
Trust is demonstrated through friendly hacking, explainability, and ethical responsibility (David Sadek)
Trust evolves to data lineage, human‑in‑the‑loop oversight, simulation, and end‑to‑end validation (Tanuj Mittal)
Complementary strengths—French deep‑tech excellence and Indian scale—fuel shared‑value partnerships (Julie Huguet)
Partnership deals across AI, space, and healthcare illustrate deep Franco‑Indian cooperation (Estelle David)
Both speakers agree that AI should serve societal needs and improve public welfare. Julie frames this through sector‑wide impact (health, agriculture, climate) and shared values, while Sandeep illustrates concrete citizen‑facing AI products (fraud detection, compliance, skilling). They differ in focus—strategic sectoral vision versus specific service‑level applications—but share the same overarching objective.
Speakers: Julie Huguet, Sandeep Kumar Saxena
AI drives innovation in healthcare, agriculture, climate, grounded in shared values (Julie Huguet)
AI solutions for citizens—fraud detection, compliance, training, skilling—enhance public welfare (Sandeep Kumar Saxena)
Takeaways
Key takeaways
Franco‑Indian AI collaboration is deepening, with concrete partnership deals in AI, space, healthcare and industry, leveraging French deep‑tech expertise and Indian scale and market reach.
Trust is identified as the essential prerequisite for scaling AI; it must be embedded at every layer of the stack, include data lineage, explainability, security and accountability, and be validated through regulatory compliance.
Different sectors (cloud, quantum, defense, industrial AI) converge on similar trust pillars – traceability, predictability, verifiability, security, human‑in‑the‑loop oversight and end‑to‑end validation.
An ecosystem mindset – partnership across companies, research institutes and governments – is required to democratise AI and preserve trust across borders.
AI can dramatically accelerate scientific discovery, but equitable access, reproducibility, transparent artifact sharing and standardized evaluation are critical to avoid a reproducibility crisis and the proliferation of AI‑generated false papers.
Open‑source scientific foundation models are advocated to accelerate progress, while acknowledging commercial fine‑tuned models may remain proprietary; openness is seen as a strategic choice rather than a requirement.
Ethical considerations (carbon footprint, responsible use, humane weapons, privacy‑preserving edge models) are integral to trustworthy AI and must be addressed through shared frameworks and guidelines.
AI for societal impact – multilingual AGI, personal sovereign edge models, AI‑driven solutions for healthcare, agriculture, fraud detection, skilling and climate – is emphasized as a way to benefit the bottom of the pyramid.
Resolutions and action items
– Formal signing of multiple Franco‑Indian partnership agreements (e.g., Dacia Technology‑GT Solutions, ExoTrail‑Druva Space, H‑Company‑St. James Hospital, North France Invest‑TIAB, T‑U‑B) to develop joint AI, space and healthcare solutions.
– Launch of the MERLIN benchmarking framework by Candela to create a shared baseline for quantum‑AI trust and reproducibility.
– Business France, LaFrenchTech and partner organisations commit to continue facilitating ecosystem‑wide collaborations and to organise future matchmaking events.
– IRO (Indian AI Research Organization) will build high‑end talent pipelines, develop compact neurosymbolic models for healthcare, sustainability and pharma, and create an open knowledge‑graph for drug discovery.
– UNICRI’s responsible‑AI toolkit for law enforcement is being piloted in India (and four other countries) with a view to producing policy recommendations and reports.
– Joint call for transparent artifact sharing and standardized evaluation criteria to improve reproducibility of AI‑generated scientific results (raised by Joelle Pineau).
– Raj Reddy’s request for a quantitative, measurable matrix to track progress on multilingual AGI and personal sovereign edge models.
Unresolved issues
– How to define and implement a universally accepted metric for progress in multilingual AGI and personal sovereign edge models.
– Balancing open‑source release of scientific foundation models with commercial protection of fine‑tuned models; no consensus on policy or incentives.
– Ensuring AI‑generated scientific papers are reliable and preventing a flood of false results; concrete standards or verification mechanisms remain undefined.
– Bridging the digital divide so that AI benefits reach the bottom of the pyramid, especially rural and low‑literacy populations.
– Establishing global, harmonised guidelines for responsible AI that are adopted across jurisdictions; current guidelines are fragmented.
– Operationalising end‑to‑end trust across heterogeneous ecosystems (cloud, edge, quantum, industrial) without a clear governance framework.
Suggested compromises
– Combine French deep‑tech depth with Indian speed and market scale to jointly develop and scale trusted AI solutions.
– Adopt an ecosystem partnership model in which each stakeholder contributes specific trust components, preserving overall system integrity.
– Release open benchmarking tools (e.g., MERLIN) while allowing companies to keep proprietary fine‑tuned models for commercial use.
– Implement human‑in‑the‑loop oversight for critical AI applications, acknowledging that full automation is not yet trustworthy.
– Promote open‑source large models (as demonstrated by LLaMA) to counter industry resistance, while encouraging responsible commercial exploitation of derived solutions.
Thought Provoking Comments
Trust is the only way to scale. If you want large corporations, banks, governments to adopt AI, they need to trust us. And when we trust things, scale is possible – just look at how India accepted UPI.
Sets a foundational premise linking trust directly to the ability to achieve scale, framing the entire panel discussion around trust as a prerequisite rather than a side‑effect.
Established the central theme of the session, prompting each panelist to frame their perspectives on AI around trust. It shifted the conversation from generic AI benefits to a focused debate on how trust can be engineered and measured.
Speaker: Arun Sasheesh (Moderator)
I would describe trust in a very simple word: I have your back and I will not fail you. Trust must be built at every layer – from data lineage, explainability, zero‑trust networking to end‑to‑end governance – it cannot be a bolt‑on.
Provides a concrete, multi‑layered definition of trust that moves the discussion from abstract values to specific technical and regulatory components.
Prompted other speakers to elaborate on technical implementations of trust (e.g., Valerian’s pillars, David’s friendly‑hacking). It deepened the technical depth of the dialogue and introduced regulatory context (DPDP, EU AI Act).
Speaker: Neelakantan Venkataraman (Tata Communications)
We see trust as five pillars: trustability (traceability), predictability (knowing limits), verifiability (benchmarking), security, and accountability (clear ownership). We released the MERLIN framework to benchmark quantum‑AI results and build a shared baseline.
Introduces a structured trust framework specific to quantum AI and announces a tangible tool (MERLIN) for community‑wide benchmarking, bridging theory and practice.
Shifted the conversation toward community building and standardisation, influencing later remarks about reproducibility (Joelle Pineau) and the need for shared baselines across France and India.
Speaker: Valerian Ghez (Candela)
Trust is not a label, it’s a proof. We do friendly‑hacking to find vulnerabilities, we ensure explainability for critical decisions, we pursue frugal AI to reduce carbon footprint, and we develop AI‑for‑green to optimise aircraft trajectories.
Frames trust as demonstrable evidence through concrete practices (security testing, explainability, sustainability), expanding the trust narrative beyond technical safeguards to ethical and environmental dimensions.
Added new dimensions (responsibility, sustainability) to the trust discussion, prompting others (e.g., Sandeep and Tanuj) to reference societal impact and scale, and reinforcing the idea that trust must be proven.
Speaker: David Sadek (Thales)
AI adoption must start at the top. I built AI‑driven sales, forecasting and analytics tools for myself, certified every team member, and we now offer ‘AI products made in India for India and the world’. Trust is built iteratively, not overnight.
Highlights leadership‑driven cultural change and the practical rollout of AI products, linking trust to internal adoption and user experience rather than just external compliance.
Broadened the conversation to include organizational change management, influencing Tanuj’s remarks on trust leading to mass adoption and reinforcing the theme that trust is cultivated over time.
Speaker: Sandeep Kumar Saxena (HCL Technologies)
When UPI was launched in 2016 it now handles 21 billion transactions a year, even for digitally illiterate users. Trust built the scale – if you build trust, scale follows automatically.
Provides a powerful, data‑driven illustration of trust translating into massive adoption, grounding the abstract trust‑scale link in a real Indian success story.
Reinforced Arun’s opening claim with empirical evidence, solidifying consensus that trust is the catalyst for scale and prompting other panelists to reference similar Indian examples.
Speaker: Tanuj Mittal (Dassault Systèmes)
We need a quantitative, measurable matrix for multilingual AGI. It’s not enough to claim multilingual capability; we must measure progress. Also, we must create personal sovereign edge models to protect privacy and consider humane weapons that disable rather than destroy.
Challenges the community to move from aspirational statements to measurable outcomes, introduces novel ethical considerations (humane weapons), and stresses privacy‑first edge AI.
Shifted the tone toward accountability and metrics, prompting later discussion on reproducibility (Joelle Pineau) and open‑source vs private models (Joelle and Amit). It added a forward‑looking, ethical dimension to the trust conversation.
Speaker: Raj Reddy (Professor, former Carnegie Mellon)
India is not a product nation; we lack global products despite strong talent. We must build high‑end research capacity, IP pipelines, and ecosystems that turn talent into globally competitive products.
Provides a candid critique of India’s innovation ecosystem, moving the dialogue from partnership to self‑sufficiency and product creation.
Prompted a shift toward discussing how to convert research into marketable products, influencing later remarks about building specific neurosymbolic models (Amit) and the need for ecosystem collaboration (Antoine Petit).
Speaker: Amit Sheth (Founder, IRO)
AI is not just an accelerator; it reverses the scientific method. We now ask for a material with desired properties and AI designs it. This requires new interdisciplinary cooperation and raises the risk of AI‑generated false papers.
Introduces a paradigm‑shifting view of AI as a tool that changes how science is conducted, while also warning about new risks (misinformation).
Expanded the discussion to the meta‑level of scientific methodology, leading to Joelle Pineau’s focus on reproducibility and the broader conversation about responsible AI in research.
Speaker: Antoine Petit (CNRS)
Reproducibility needs transparency and clear evaluation criteria. AI can actually accelerate reproducibility by making artifacts publicly available and running reproducibility challenges.
Addresses a core crisis in AI research, offering concrete solutions (transparency, evaluation) that tie back to trust as proof.
Provided actionable steps for the community, linking back to earlier trust frameworks and reinforcing the need for open standards, which later influenced the open‑source debate.
Speaker: Joelle Pineau (Chief AI Officer)
The UN has developed practical frameworks for responsible AI in law enforcement, now being piloted in India and other countries. Bridging the digital divide requires such shared guidelines and collaborative toolkits.
Highlights global governance efforts and concrete policy tools, emphasizing the role of international cooperation in building trust.
Shifted the conversation from corporate/technical trust to policy and global equity, reinforcing the summit’s theme of “AI for all” and supporting Amit’s call for broader ecosystem collaboration.
Speaker: Irakli Beridze (UNICRI)
Overall Assessment

The discussion coalesced around the central premise that trust is the prerequisite for AI scale. Arun’s opening claim framed trust as the linchpin, and each subsequent speaker deepened this premise from different angles—technical architecture (Neelakantan), quantum‑AI standards (Valerian), security and sustainability (David), organizational culture (Sandeep), real‑world Indian examples (Tanuj), measurable metrics and ethics (Raj Reddy), ecosystem productisation (Amit), paradigm‑shifting scientific methodology (Antoine), reproducibility practices (Joelle), and global governance (Irakli). These pivotal comments acted as turning points, steering the dialogue from abstract enthusiasm to concrete frameworks, metrics, and policy, and ultimately reinforced the summit’s goal of forging a trusted, scalable AI partnership between France and India.

Follow-up Questions
How can we create a multilingual AGI with measurable progress?
Establishing a multilingual artificial general intelligence with clear metrics is crucial for inclusive access and to evaluate real-world impact across diverse language communities.
Speaker: Prof. Raj Reddy
How can AI technologies be effectively delivered to people at the bottom of the socioeconomic pyramid, especially in rural areas?
Ensuring that AI benefits the most vulnerable populations addresses equity concerns and prevents a digital divide that could exacerbate existing inequalities.
Speaker: Prof. Raj Reddy
How can we develop personal, sovereign edge AI models that ensure privacy and operate offline from the cloud?
Personal, on‑device AI models protect user data and privacy, a prerequisite for widespread adoption in sensitive applications such as health and finance.
Speaker: Prof. Raj Reddy
How can we design humane AI‑powered weapons that disable rather than destroy, ensuring ethical use in conflict?
Exploring non‑lethal, AI‑driven defense systems aligns military technology with humanitarian principles and international law.
Speaker: Prof. Raj Reddy
What structural shifts are needed in national research and funding agencies to support interoperable AI scientific ecosystems beyond short‑term pilots?
Long‑term, interoperable ecosystems are essential for sustained AI research impact; identifying needed policy and funding reforms will guide future investments.
Speaker: Prof. Antoine Petit
Is there a need for a dedicated AI‑for‑Science mega‑platform/facility, and what would its scope be?
A centralized platform could provide shared compute, data, and standards, accelerating cross‑disciplinary AI research and reducing duplication of effort.
Speaker: Prof. Antoine Petit
What standards or methodologies are required to ensure AI‑generated scientific discoveries are reliable and reproducible?
Defining transparent evaluation and reproducibility protocols will build confidence in AI‑driven results and prevent the propagation of erroneous findings.
Speaker: Prof. Joelle Pineau
How can the AI community establish open‑source practices for foundational scientific models while balancing commercial interests?
Open‑source scientific models can accelerate innovation, but commercial incentives must be reconciled; guidelines are needed to navigate this tension.
Speaker: Prof. Joelle Pineau and Dr. Amit Sheth
How can global guidelines for responsible AI be harmonized across nations and sectors?
Consistent responsible‑AI frameworks are vital for cross‑border collaboration, trust, and preventing regulatory fragmentation.
Speaker: Mr. Irakli Beridze
How can we break down silos between quantum computing and AI to build a shared community and benchmarking framework?
Integrating quantum and AI research through common benchmarks (e.g., Merlin) will foster reproducibility, accelerate progress, and create a unified ecosystem.
Speaker: Valerian Ghez
How can trust be operationalized across ecosystem partners (e.g., Tata and Thales) to maintain end‑to‑end governance?
Implementing consistent trust mechanisms across partners ensures data integrity, compliance, and reliable AI deployment at scale.
Speaker: Neelakantan Venkataraman
How can AI for law enforcement be implemented responsibly across diverse jurisdictions, ensuring public trust?
Developing adaptable toolkits and policy recommendations is essential to balance security benefits with civil liberties in varied legal contexts.
Speaker: Irakli Beridze
How can AI be leveraged to address climate resilience, agriculture, and energy challenges in countries with limited experimental facilities?
Applying AI to priority sectors can compensate for infrastructure gaps, but requires tailored models and collaborative frameworks to be effective.
Speaker: Prof. Antoine Petit (question posed by Abhay Karandikar)
How can the scientific community prevent the proliferation of false papers generated by AI and maintain research integrity?
Establishing validation mechanisms and ethical standards is critical to safeguard the credibility of AI‑augmented scientific publishing.
Speaker: Prof. Antoine Petit

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

Panel Discussion AI in Healthcare India AI Impact Summit

Panel Discussion AI in Healthcare India AI Impact Summit

Session at a glanceSummary, keypoints, and speakers overview

Summary

The panel, chaired by Dr. Sabine Kapasi, examined how artificial intelligence can transform healthcare, focusing on near-term opportunities for India and other low- and middle-income countries (LMICs) [7-16]. Anthropic’s Chris Ciauri emphasized that while AI offers substantial benefits, it also poses risks if deployed carelessly, and the company’s core mission is to prioritize safety [28-32]. He highlighted two contrasting challenges: in the United States clinicians spend only about 30 % of their time on patient care due to administrative work, whereas in India the primary obstacle is limited access to care, with typical primary-care visits lasting only two minutes [35-40]. To address these issues Anthropic has opened a Bengaluru office and aims to build solutions locally, viewing success in India as a template for the broader global south [47-50].


A concrete use case cited was the partnership with Banner Health, where Anthropic’s model summarized lengthy oncology reports, cutting eight hours of clinician effort to a concise brief and demonstrating the importance of a model that can say “I don’t know” when uncertain [123-130]. In drug discovery, collaborations with companies such as Novo Nordisk and Sanofi have reportedly reduced development cycle times from weeks to hours, illustrating AI’s potential to accelerate therapeutics [136-138]. Recognizing India’s multilingual landscape, Anthropic trained its Claude model on twelve Indic languages to improve accessibility, though further work on dialects remains needed [141-143].


Dr. Aditya Yad described the Swiss-India free-trade agreement, which commits $100 billion of Swiss investment and one million jobs in India, and noted that AI could help lower the high cost of Switzerland’s high-quality but expensive healthcare system [84-94]. He argued that AI-driven optimization of research, drug development, and manufacturing (such as AI-enhanced bioreactors that increase yields while reducing costs) could be a major growth area over the next five years [166-170]. The panel also discussed the rapid evolution of diagnostic technologies, suggesting that home-based screening tools powered by AI could dramatically reduce testing costs and expand access worldwide [173-176].


Both speakers agreed that safety and transparency are non-negotiable; Anthropic insists its models will never train on patient data and will always defer judgment to clinicians, while Swiss policymakers stress the need to build trust in medical data handling [232-244][293-297]. To enable adoption, they highlighted the importance of workforce enablement (training clinicians to use AI as a preparatory aid rather than a decision-maker) and of industry-led programs that teach CEOs how to embed AI from the outset [259-264][247-258]. The discussion concluded that, with continued safe model improvements and targeted small-language applications, AI is poised to reshape healthcare delivery and equity in India and beyond over the coming decade [270-273][278-283].


Keypoints


Major discussion points


AI-driven opportunities for healthcare in India and the Global South


Chris highlighted the contrast between the U.S. administrative burden (only ~30 % of clinician time spent on patient care) and India’s access challenge, noting that AI can reduce paperwork in the U.S. and broaden care access in India [35-41]. He emphasized India’s “digital healthcare system…the envy of the world,” which provides a strong foundation for AI deployment [45-46]. Anthropic’s recent launch of a Bengaluru office underscores its commitment to building solutions locally [47-50]. Multilingual capability was cited as a key Indian need, with Claude now trained on 12 Indic languages [141-143].


Safety and risk management as a non-negotiable foundation


Anthropic’s mission centers on safety, ensuring models can say “I don’t know” rather than give confident but wrong answers [31-33]. The Banner Health case illustrates how a safety-first model can reliably summarize lengthy oncology reports, saving clinicians hours while avoiding hallucinations [120-130]. Chris reiterated that safety is “table stakes” for any healthcare application [121-130][132-136].


Switzerland-India collaboration and AI’s role in cost reduction, drug discovery, and manufacturing


Aditya described the new India-Switzerland free-trade agreement, committing $100 billion of Swiss investment and 1 million jobs, including in healthcare [84-89]. He argued that AI can dramatically lower healthcare costs by accelerating drug target identification, clinical validation, and market access [94-98]. He also pointed to emerging AI-enabled biomanufacturing (biofoundries) that improve yields and reduce production costs, a priority for India’s biotech policy [164-170].


Workforce enablement, education, and adoption challenges


Sabine asked how to train clinicians while keeping AI as a decision-support tool, not a substitute [223-227]. Chris responded that AI should assist (“AI is for preparation, clinicians are for judgment”) and that models must explicitly flag uncertainty [232-236][238-241]. Aditya added that Switzerland’s state-level program trains CEOs of SMEs to embed AI from the start, addressing resistance and ensuring homogeneous, strategic AI use [251-258].


Future outlook: scaling models, edge use cases, and India’s strategic role


Anthropic plans to release increasingly capable and safe models every ~2.5 months, fueling optimism about transformative health-care impact [270-273]. Chris projected a complementary ecosystem of smaller, targeted language models for edge applications, with open-source playing a part [278-286]. He noted that many countries, including India, will likely lead in deploying these niche models [289-290]. Sabine framed the discussion within the 2030 AI-adoption roadmap, emphasizing workforce enablement and non-sexy use cases like drug discovery and diagnostics [259-262].


Overall purpose / goal of the discussion


The panel aimed to identify near-term, high-ROI AI use cases for India and other low- and middle-income countries, explore how AI can strengthen healthcare systems, and outline strategies (policy, investment, workforce training, safety standards) to accelerate adoption of real-world AI solutions [15-18][20-23][104-112].


Overall tone and its evolution


– The conversation opened with a formal, forward-looking tone, acknowledging the event’s slowdown but stressing the importance of AI in healthcare [1-4][7-12].


– It quickly shifted to an optimistic, collaborative tone as Chris described Anthropic’s global expansion and enthusiasm for India [27-34][47-50].


– Mid-discussion, the tone became cautiously pragmatic, focusing on safety, risk, and the need for rigorous validation [31-33][120-130][232-241].


– When addressing policy, investment, and manufacturing, the tone turned strategic and solution-oriented, highlighting concrete initiatives and partnerships [84-89][94-98][164-170].


– The final segment adopted a hopeful yet realistic outlook, balancing excitement about rapid model improvements with acknowledgment of education, trust, and regulatory challenges [270-286][289-290][293-297].


Overall, the dialogue remained constructive and collaborative, moving from broad opportunity framing to detailed considerations of safety, implementation, and future scaling.


Speakers

Chris Ciauri


Role/Title: Managing Director at Anthropic; leads global expansion across EMEA, APAC, and Latin America.


Expertise: Enterprise AI adoption, scaling SaaS and cloud businesses, AI safety, large-language models.


Affiliation: Anthropic [S1]


Dr. Sabine Kapasi


Role/Title: Clinician/surgeon; moderator of the panel discussion.


Expertise: Clinical practice, surgical care, healthcare delivery, AI adoption in clinical settings.


Affiliation: Not specified in transcript (moderator role) [S2]


Dr. Aditya Yad


Role/Title: India Relations Advisor at Invalude (innovation and investment promotion agency of Canton Broad, Switzerland); biotechnologist; policymaker/legislator in Switzerland.


Expertise: Biotechnology, cross-border innovation partnerships, Swiss-India collaboration, healthcare policy, AI-enabled drug discovery and manufacturing.


Affiliation: Invalude, Canton Broad, Switzerland [S4]


Additional speakers:


– Rizwan Sir (mentioned as absent)


– R. S. Sharma Sir (mentioned as absent)


No further speakers were identified in the transcript.


Full session reportComprehensive analysis and detailed insights

Opening remarks – Dr Sabine Kapasi noted that the final day of the AI Impact Summit was slower than the preceding three days and that two senior speakers, Rizwan Sir and R S Sharma Sir, were absent despite their stature in Indian health-system digitalisation and global public-infrastructure standards [1-6]. She introduced the session’s focus on how artificial intelligence can transform health care, especially in India and other low- and middle-income countries (LMICs) [13-15].


Introductions – The co-panelists were Chris Ciauri, Managing Director of Anthropic and former senior executive at Salesforce, Google Cloud, and CEO of Unilever [7-10] (as stated in the transcript), and Dr Aditya Yad, India Relations Advisor at Invalude [19-23].


1. Safety-first stance

Anthropic’s mission is to balance powerful model capability with rigorous safety, ensuring the system can say “I don’t know” rather than give confident but incorrect answers [31-33]. The company commits that Claude will never be trained on patient data and that uncertainty handling is “table stakes” for any health-care application [232-236][242-244].


2. Use-case categories

* Administrative burden – In the United States only about 30 % of a clinician’s time is spent on patient care because of paperwork [36-38]; globally this represents a $1 trillion opportunity for AI-driven workflow automation [132-135]. Anthropic’s partnership with Banner Health showed Claude summarising a 100-page oncology report in minutes, cutting eight hours of clinician effort [123-130].


* Drug discovery & regulatory sciences – Collaborations with Novo Nordisk and Sanofi have reportedly shrunk development cycles from eight weeks to eight hours [136-138]. Dr Yad highlighted that AI can also accelerate target identification, clinical validation, market access and streamline regulatory and clinical-trial processes [94-98].


* Multilingual access – Claude has been trained on twelve Indic languages, with further work on dialects required to make AI-enabled health care truly inclusive [141-143].


* Manufacturing & biofoundries – AI-enhanced small-scale bioreactors (biofoundries) improve yields and reduce production costs, aligning with India’s national bio-manufacturing policy [164-170].


* Diagnostics & preventive screening – Rapid advances in home-based diagnostic technology can lower testing costs and expand screening, but insurers often resist paying for tests when patients feel well [173-176][210-212]. Dr Kapasi cited AlphaFold as a precedent for how large-model AI can transform protein discovery and biotech [173-176].


3. Ecosystem & policy context

The new India-Switzerland free-trade agreement commits $100 bn of Swiss investment and the creation of one million jobs in India, including in health-care [84-89]. Switzerland’s high-quality but expensive health system can benefit from AI-driven cost reductions in drug development and market access [94-98].


India’s cloud landscape was described as “the highest adoption of cloud outside the US; it is second in the world,” making the country a natural laboratory for Anthropic’s growth [186-190].


4. Workforce enablement

* AI should assist clinicians (“AI is for preparation, clinicians are for judgment”) and must explicitly flag uncertainty [223-227].


* A state-run programme aims to train 40 000 SME CEOs on AI integration, addressing fragmented adoption challenges [251-258].


* Public trust in the handling of personal medical data is a prerequisite for wider AI adoption, requiring transparent governance and consent mechanisms [293-296].


5. Future outlook

Anthropic’s latest release, Claude 4.6 (the “Pro Max” model), was highlighted as a step forward in safety-first capabilities [120-122]. The company follows a rapid release cadence (approximately every 2.5 months), with Ciauri stating, “I’m extremely optimistic” about the impact of these improvements [270-273]. The emerging ecosystem will combine ever-more capable large language models with smaller, language-specific edge models for niche use cases [278-286][289-290].


Key take-aways

1. AI can markedly reduce clinicians’ administrative load and increase patient-care time [132-135].


2. India’s extensive digital health-record infrastructure and high cloud adoption provide a strong platform for AI at scale [186-190].


3. The India-Switzerland partnership creates a strategic financing framework for health-tech [84-89].


4. Safety-first design, exemplified by Claude’s “I don’t know” responses, is non-negotiable [31-33][232-236][242-244].


5. Large-language models can accelerate drug discovery, regulatory processes, and biomanufacturing, while multilingual models expand access [136-138][141-143][164-170].


6. Workforce enablement (training clinicians, CEOs, and SMEs) is essential for responsible AI deployment [223-227][251-258].


7. Building public trust around medical data is a prerequisite for broader adoption [293-296].


8. The future AI ecosystem will blend ever-more capable large models with targeted small-language edge models [278-286][289-290].


Closing – Dr Kapasi reflected on the 2030 AI-adoption roadmap, emphasizing workforce enablement, B2B workflow optimisation, and the development of bots to support frontline health workers [259-263]. Ciauri expressed confidence that continual, exponential improvements in safe AI models will unlock transformative health-care benefits [270-273][274-283]. The session concluded with gratitude to the participants, an invitation to the next AI Summit in Geneva, and a shared hope that equitable AI will reshape health systems worldwide over the next five years [298-304].


Session transcriptComplete transcript of the session
Dr. Sabine Kapasi

The last day of the event is a little slow today. You know the energy of the last three days seems to have gotten people a little right. A big week. Yeah, I know. So today, unfortunately, we don’t have a couple of people who are supposed to be here, namely Rizwan sir as well as R.S. Sharma sir. Both of them stalwarts in the industry of setting context in both Indian healthcare systems but also in setting up global standards for digital public infrastructure in India. But let’s make do without them for today. So today, we are talking about AI in healthcare, right? I’m Dr. Sabine Kapasi. We have Dr. Aditya as well as Chris here with us.

I’ll give their intro in a bit. We recognize today that AI will transform healthcare. Given that India and many other low- and medium-income countries have very low levels of digital adoption, it’s important to determine where AI solutions are likely to have the largest ROI, or rather the largest opportunity, in the next 3 to 5 years. So in addition we also need to ensure that doctors, hospitals and other healthcare professionals are getting ready to leverage AI as well. So today we are going to focus on identifying near term opportunities for India, and India as a leader in the LMIC space, that’s the low and medium income countries space, and discuss strategies to strengthen the healthcare system for adoption of real use cases of AI.

I think that’s going to be one of the challenges as well as one of the longest value gains that we are able to deliver as we go. So before we go ahead I would love to introduce my co-panelists here. Chris is the managing director at Anthropic. He leads global expansion across EMEA, APAC and Latin America. With over 25 years of experience scaling SaaS and cloud businesses, including senior leadership roles at Salesforce, Google Cloud, and most recently as the CEO of Unilever, he brings deep expertise in enterprise AI adoption and national technology growth. He is known for building high-performance global teams and driving transformation through collaborative leadership. Thanks a lot, Chris, for being here.

Chris Ciauri

Thank you for having me.

Dr. Sabine Kapasi

Before we introduce Dr. Aditya, I would love to throw a question to you. So how does Anthropic, as a company now, view opportunities in healthcare AI, not just in the U.S. and in Western Europe, but also countries like India and the global south, of how AI is being adopted, especially in the healthcare industry?

Chris Ciauri

Thank you for having me. I’d say we think healthcare is certainly one of the areas where AI can do a lot of good. It also can create a lot of harm if done carelessly. Anthropic was founded with a mission around safety, and we focus a lot on that. So we like the tension between the capability of AI models and making sure that the safety is right, so that we can deliver on some of the opportunities. Maybe I’ll use two examples just to frame areas where we think big impact can happen. I’ll use a U.S. example and an India example.

If you think about certainly one of the biggest challenges in the U.S. — India has this too, Sangeeta mentioned some of this — it’s really around the burden of administration. In the U.S., only 30% of a clinician’s time, a doctor’s time, is spent on patient care. The rest is on paperwork and administrative tasks. In India, one of the biggest challenges is just access. There’s data over the last decade that says the average primary care visit only lasts two minutes. So if you think about where AI can impact those, and we believe it can have a huge impact: if we can decrease the paperwork, decrease the administrative burden, we can have doctors in the U.S.

and other places spending much more time on patient care. Huge outcome. It can have phenomenal ROI. In India, we think solutions like ours can help make your healthcare system much more broadly accessible. And the other thing that’s uniquely exciting about India is that you’ve built a digital healthcare system that’s the envy of the world. We look at that with excitement because we think it gives AI a really great place to land when you’ve got that kind of digital infrastructure. So maybe my last comment, for those that don’t know Anthropic and what we’re up to: we’re so excited about opportunities like this that we announced we launched our operations here recently.

We’ve opened an office in Bengaluru because we think that to address a problem like this, we want to be here on the ground building with you. And we also think, as people have talked about the scale of India, a leader of the global south: if we can make this work in India, we think we have the possibility to shape how AI-driven healthcare evolves in the rest of the world.

Dr. Sabine Kapasi

No, I think you’re right. And you have worked with every tech company under the sun, which is amazing. Someday you’ll have to tell me what that looks like because, God, I’m a little further away from tech. I’m a clinician by…

Chris Ciauri

I’m as far away from being a clinician as you are from being a technologist.

Dr. Sabine Kapasi

I think that shouldn’t be so, right? And coming back, that is where I think it would be great to introduce Dr. Aditya Yad. Dr. Aditya is the India Relations Advisor at Innovaud, the innovation and investment promotion agency of the Canton of Vaud, Switzerland. Based in Lausanne, he plays a strategic role in strengthening Switzerland-India collaboration by facilitating cross-border partnerships, supporting high-growth startups, and enabling market entry for Indian companies into the Swiss and European innovation ecosystem. He himself is a biotechnologist and has worked at the cross-section of tech, investment, and innovation. He has done a lot of investments, in biotech as well, I believe, with a focus on technology and research-driven enterprises and global expansion pathways. Aditya acts as a key bridge between Indian entrepreneurs, investors, academic institutions, and the vibrant innovation landscape of Switzerland. Which is more vibrant?

Honestly, I doubt it. India is far more vibrant, to say the least. We can have a debate on that, yeah? Yeah, no, maybe we’ll discuss that in a bit. But, you know, as we mentioned, you are as far from being a clinician as I am probably from being a technologist, but we need a middle bridge. And when we are talking about healthcare and AI systems, we need a middle bridge. So you have looked at ecosystems on both fronts, right, and innovation happening on both fronts, in India and Switzerland. Switzerland has a deep legacy of research in biotech and is now adopting new technologies on top of that legacy, versus India, which has leapfrogged into an era of fast growth and fast technology adoption. How do you see these two systems playing out and interacting with each other for a larger good in outcomes, especially when we are looking at healthcare?

Dr. Aditya Yad

Thank you for the question and for the invitation. So, as you said, Switzerland and India — when you look at the size of the country and the size of the population, of course there will be very different challenges for both countries. On the Swiss side, to continue the debate: Switzerland has been ranked number one in the Global Innovation Index for the past 15 years straight. A large part of that is thanks to the healthcare industry — the biotech, the pharma, the life sciences industry. Today we have around 1,700 companies or research institutions based in Switzerland; for a small country like this, that is really what gives us this vibrant ecosystem of innovation. The second point is that Switzerland is not a big domestic market, right?

We are 9 million people. So all the products…

Dr. Sabine Kapasi

That’s not even Delhi. Like, that’s not even Delhi.

Dr. Aditya Yad

That’s why I usually like to have this scale, you know. Just as a parenthesis: India and Switzerland have signed this free trade agreement now, right? So we are concretely in business between Switzerland and India. As part of that free trade agreement signed by both governments, Switzerland and the EFTA countries now have a commitment to invest $100 billion into India in various sectors — which also includes healthcare, by the way — and to create 1 million direct jobs in India in the next 15 years. So now there’s a concrete engagement between both countries. When it comes to healthcare, Switzerland is known for two things: a very efficient and highly qualitative healthcare system, but a very expensive healthcare system.

So you get the price and the quality that goes with it. This is where AI is actually going to play a very big role. If you talk about cost reduction and optimization of all the processes, from research to putting medication on the market: using technology, using AI will, we believe, tremendously bring down the cost of healthcare. Because I’m also a policymaker in Switzerland, a legislator, I can say that in the public debate there’s a lot of heat, a lot of pressure from the public, that healthcare premiums are too high for what people are getting. So this is exactly where we can say: okay, now we have these tools that can accelerate drug development — so that instead of spending a few billion developing a new drug, we can use AI tools to speed up the process, to increase the probability of finding the right targets, and to support clinical validation and market access.

So this is where I think there is very big potential. And from the industry’s perspective, we also see that a lot of companies are now either shifting from traditional pathways into AI initiatives, or — the smaller companies, the startup companies with which we work — they embed an AI strategy within the development of their company overall. It’s become completely normal that AI has to be included from the very start. And this is also what startup companies with new innovative products are using in their own pitches in order to convince investors. I just published a report on last year: we had $2.5 billion of investment going into Swiss startups in that year alone.

Many of them have been able to raise funds because they’ve been integrating AI tools into their development in that sense.

Dr. Sabine Kapasi

Thank you so much, Aditya. Chris, back to you. First of all, I’m really glad that we have people who have been building for healthcare systems but are not native to healthcare. Because sometimes when we think about healthcare, we only think about doctors, right? Or we think about hospitals. But thankfully, we know that healthcare is so much more; it’s not just about doctors or hospitals. And as a company — of course, as I said, please feel free to share your experience across the several different domains you have worked in — you have worked with several technology companies that were not native to healthcare but now see healthcare as a huge opportunity as well.

So which are the specific healthcare problems that companies like Anthropic are targeting to solve? And how do you test for the risks, especially when you’re building LLMs, which are quite generalized? Because healthcare has a very immediate risk to outcomes, and that’s something that needs to be tested for, or at least covered for. So how do you look at it?

Chris Ciauri

Maybe I’ll do the risk first, and then I’ll talk about a few use cases. And by the way, thank you for the comments that basically date me, because you know all the companies — I’ve been around a long time. But I’m privileged to be part of Anthropic and what’s going on right now in AI, because I think this has by far the greatest opportunity to transform healthcare of any technology transformation we’ve seen over the last three decades or so. But coming back to the risk point: I made the point up front that AI can do a lot of good in healthcare, and it can do a lot of harm if you’re not very careful about the way you use it.

Because we’ve been so focused on safety, Claude uses language like “I don’t know” and “I’m not certain” quite freely. And we think that’s critical in an industry where the stakes are so high. I’ll give you one example. One of our customers is Banner Health in the United States. They’ve used us to summarize 100-page oncology reports. Previously, a clinician would come in getting information from across multiple appointments and specialists, and it took them eight hours just to get to the point where they could start to provide an opinion, care, judgment. That is now summarized concisely. So all of that administrative or information-retrieval time now quickly moves into judgment and delivering care for patients.

That’s both a use case and, I think, a demonstration of why they chose us. Ultimately, the final decision came down to this: they wanted a model that was so grounded in safety that it would say “I don’t know” or “I’m not certain.” What they didn’t want was a model that was confident, or felt confident, but was wrong. So I think safety is paramount for us. We think it’s table stakes in healthcare, in everything we do. On use cases — I briefly hit on this before — certainly administrative burden is one, and we think that’s pervasive everywhere. That speaks to your take-costs-out-of-the-system issue.

The 70% of time that I talked about doctors in the U.S. spending on admin — that’s a $1 trillion problem, or opportunity. So the magnitude, if we could start to address things like that with AI globally, is huge. A second one: drug discovery. In some of our work with customers like Novo Nordisk and Sanofi, we’ve been able to reduce drug-development life-cycle times — heavy paperwork, heavy regulation — from eight weeks to eight hours. Just a phenomenal difference in how quickly we could get amazing drugs and medicines to market if that becomes pervasive. The last one, I’ll be really India-specific. I mentioned that access here is a challenge, and certainly if the country can get the healthcare system to serve all Indians, that would be game-changing for India and, I think, for the global south.

One of the big barriers is multilingual access. You can’t use a model that’s good in English but not good in other languages. So as part of our entry into India over the last six months, we’ve trained Claude on 12 Indic languages. That’s not to say it’s done and over — there are more languages, there are dialects — but I think those are the types of things where AI can improve access to healthcare.

Dr. Sabine Kapasi

No, I think just about 15 days ago you guys launched your Pro model — Pro Max, I think that was the one, right?

Chris Ciauri

It was a couple of weeks ago; it was our new version, Claude 4.6.

Dr. Sabine Kapasi

Yes, and I had a friend call me up who had been watching the stock market very heavily when you launched your recent version, and they’re like, you know, there are jobs that are in serious danger. And I remember telling my team to go back and start playing around with the new version, because the new capabilities that have come through have been fascinatingly interesting in the way they are going to drive adoption of new workflows. One of the areas where AI is adding a lot of value is the B2B space. So before we go back to that, we’ll talk a little bit more about how AI is shifting biotech as an industry, because healthcare is not just about patient delivery; it’s also about how we get there, as you touched upon in drug discovery.

And I think AlphaFold was a phenomenal change in the way new protein discoveries are being done today. I could not believe that in my lifetime I would see such a jump in technology, and there is so much more to come. It could not have been possible without large-scale models. That being said, when we look at countries like India: when I was practicing as a surgeon, my OPD used to consist of 200 people a day. So the amount of time we spend per patient in the lower half of the world, let’s put it that way, is very different. The workflows are very different. And even though the clinical logic might remain the same, the clinical skills deployed are extremely different in terms of action items.

I’ll circle back to you on that, but we would love to discuss how you are adapting those kinds of use cases to deliver value, not just in countries which have optimized for outcome — especially optimized time for outcome — but using the same principles for a global-scale outcome difference as well. But before we do that, I would love to have your thoughts on how, in your perspective, drug discovery, clinical trials, regulatory sciences, as well as manufacturing are being affected by the new innovations in AI, and how you see that playing out in the next 10 years. Or five years, I think; AI is moving fast enough that we can’t predict 10 years at this point.

Dr. Aditya Yad

So, one thing: we touched upon drug discovery and the impact of AI on the healthcare system in general, on hospitals, clinics, and so on. Manufacturing is actually very interesting, and it’s also very relevant to India, because there’s a new policy in India to have this biofoundry. Biomanufacturing in general has become a national priority in India, and the inclusion of AI tools in this is very, very relevant. We see some companies already shifting into that in Switzerland as well, also smaller companies. If you talk about the manufacturing of different products in a controlled environment, where parameters are being monitored over time, with self-learning about optimizing those parameters in order to increase yields, for instance, or increase the ROI of these production systems — that is becoming a very interesting trend.

We have a bunch of companies. We have the large companies — Novartis, Roche, Lonza — producing massively, but you now have these bioreactors that are smaller in size but more qualitative, because they can use AI tools. So you are able to produce very high-quality products that could be very expensive, but because AI is being used in a small, controlled setting with a better yield, the prices and the production costs are limited. This is typically one example where, from the industry’s perspective, there is big interest and big potential over the next few years. Because at some point these drugs will have to be manufactured, they will have to be distributed, they will have to reach the patients — at the end of the day, this is the main goal, right? So how do we streamline the whole process from R&D to drug delivery at the patient level? AI being infused into all of these different steps — that will be the challenge. But on manufacturing, I think there is a lot of potential in the next five years.

Dr. Sabine Kapasi

Thank you so much. I think one of the things that also has massive potential is screening, and also health from an insurance and finance perspective. Diagnostic technology is now evolving so fast that we can take it directly to people’s homes, and the signals we are capturing and the biomarkers we are discovering at pace reduce the cost of testing drastically — so that screening as a solution becomes available across the world, and is not confined to areas where large capex in diagnostic capacity building has already occurred. So that is one thing I would love to have some thoughts on.

I’m so sorry, I was off there for a bit. Okay. As we spoke about a bit ago: countries like India and other low-income countries, or rather other global south countries — I would never say low-income anymore; look at us right now — are shifting now in terms of adoption. And countries like India have massive digital adoption coming through as well. In urban areas, there’s close to 90% smartphone ownership; in rural areas, it’s touching 75%, which is just fabulous and mind-boggling altogether. But in such ecosystems, how do you see the potential of such markets actually developing solutions within healthcare that shift perspectives in the developed markets as well?

Chris Ciauri

You know, I’ll share some statistics that you might find surprising based on what you just said. India has the highest adoption of Claude outside the U.S.; it’s second in the world for Claude adoption. So, yes, it’s the global south, but your country is consuming AI. In all of my travels, I’ve probably not seen or felt a country that’s more optimistic about the potential of this technology. And it’s not just second in usage; it’s the fastest growing. In the last four months, the usage of Claude and the revenue for Anthropic have doubled in India. So I wanted to make that point. But if we think about what that means: given the context of the amazing digital health record system you’ve built, which is not just unprecedented in the global south.

It’s one of the few globally. And it really does give you, and us, I think, the ability to do something quite special across, you know, the largest democracy in the world — a massive population that also has the additional challenge of being multilingual. And it’s why I said at the beginning: if we can make this work in a place like India, not only does that give the global south a model, I think it gives the whole world a model of how we could really see healthcare transform.

Dr. Sabine Kapasi

Exactly. I think some of the ways — for example, I’ll take a case from my own stories. We had a patient who was a dengue patient. We knew clinically this was a dengue patient. We were working in a very, very remote region, so we had no access to diagnostic tests at that time, and we just started treating them. The reason is the drugs were cheaper than the diagnostic tests, and the patient couldn’t afford the tests. So for the system at that time it was a trial-and-error problem, but the clinical values were stark enough for us to know that this was not actually a risky case; this was almost certainly a dengue case, even though we didn’t have the data to back it up.

So I think taking those clinical validations and that clinical intelligence, combining them with the new discoveries and biomarkers that new technologies are now bringing in every day, and making diagnosis and drugs affordable across the world — that, I think, is going to be the next big leap in the healthcare ecosystem. It is going to make a fundamental shift within that ecosystem, at least in my understanding. And you guys are at the forefront…

Chris Ciauri

I completely agree. If you think about the situation you described there, and the one before, when you said you used to see 200 patients a day: I’ve been very lucky. I think those of us who work in technology companies, who get to help businesses in different sectors do things better — I always feel like we’re lucky, because normally I’d get to spend 30 minutes with you and say, tell me what’s going on. If you could fix one thing, what would it be? What would mean that you’d be so efficient in the care you gave that you could extend the time with patients, and you could provide a way that a hundred of them could self-serve?

I mean, I think those are the kinds of conversations that we get to have as this technology hits a sector like healthcare.

Dr. Sabine Kapasi

No, of course. We’ll take that offstage later. But to your note: you have been a biotechnologist and a researcher yourself. How do you see — and we all know in healthcare that prevention is better than cure; that’s been said, but as a system we have never really adopted it at scale — so how do you see diagnostic technology evolving through the use of AI in the next few years? And how do you see that playing out for countries…

Dr. Aditya Yad

…diagnosed and prevented rather than treated. So the cost of having a diagnostic tool that costs two, three, or four times more than the current one was not enough to convince insurance companies to pay for it in order to avoid a $20,000 or $50,000 therapy — for cancer, for instance. So the narrative and the whole system have been built around treatment. If we can use AI and prove that it can have a significant impact both on the quality of diagnostics and on the cost of developing these diagnostics and putting them on the market, then I think we can really make a change in how we see healthcare as a system as well.

Dr. Sabine Kapasi

No, I think that’s true. So we have been talking to medical device companies who are now targeting new-age diagnostic tools — even legacy companies like GE and Philips. One of the larger problems they face is that adoption of diagnostics is an issue, especially in screening, very much so in screening. Because if you feel there is a problem with your body, you will certainly want to figure out a way to solve it, be it through self-pay models, which are prevalent in this part of the world, or through insurance models. But screening basically precedes the need for healthcare; it precedes when you feel you have a problem with your body.

So how do you make people pay when they don’t see the need for it? So, throwing that back to you, two questions. One point is that healthcare is an industry where we understand the risks of using any tool, be it a pharma tool or an AI tool; we know the risks are quite immediate and sometimes life-threatening. So we are always quite skeptical in how we position ourselves and how we position the tool. How do you train the healthcare workforce to adopt it while also keeping a layer of separation between the tool and the people adopting it? Because you don’t want people acting on health advice from an LLM today.

No, of course. That’s one: how do you make the education for the healthcare workforce strong enough, while also ensuring that people are not directly acting on that advice? And secondly, how do you educate the ecosystem better so that processes like screening can precede the need — especially the felt need — in healthcare? How do you execute those kinds of strategies? Those are the two questions I would love for you to give me a thought on.

Chris Ciauri

Maybe on the second one I might see if you have an opinion, because I’m an AI person and not on the clinician side. On the first one: I think you have to be clear, and we certainly have clarity on this in dealing with healthcare systems and customers in the health and pharma sectors. AI is for preparation; clinicians are for judgment. We have no intention of Claude being a doctor or a nurse. Thank God. Doctors are really scared. I think you have to be clear about that. And it goes back to what I said before, because if these models are really going to serve healthcare, Claude is going to know when to say, I don’t know.

I’m not certain of that; this is where you need to go; and we’re going to have this conversation together with the clinician. We think that’s table stakes. The same thing with patient data: Claude will never use someone’s patient data to train our models. And I think that’s the key. If you don’t make those your non-negotiables, then AI will not get to have the impact that it should have.

Dr. Sabine Kapasi

No, that’s true. And if you can answer that.

Dr. Aditya Yad

Maybe just to weigh in on that, because it is true what you just said about the importance of AI being with the companies, with the innovation segment, and not necessarily only at the end — because, as you said, there can be pros and cons to that usage. I’ll give you one practical example with our state government. We launched a program just one year ago because we noticed that smaller companies in particular — small and mid-sized companies developing these new technologies — struggled with how to embrace AI. There is still a bit of caution about how to use it, how not to make a mistake at the beginning of the process and then go in a direction that was not anticipated.

So we launched this program a year ago, where we have cohorts of companies, many of them funded by the state government, and we talk directly with the CEOs of these companies; we have a leadership program to train CEOs to think about how they can implement AI from the very start of the process. The challenge we have is that in our state, for instance, we have 40,000 companies, SMEs. How do we convince them to embrace AI, and how do we sell them the benefit of using AI? Because there’s also resistance to change at that level. As long as we don’t know who is really going to take the lead on using AI tools, everybody will be using more or less of it, and there will not be a homogeneous application of AI.

So here we said, okay, the industry needs to take charge of doing that, and then we’re going to train the people, as you said, on the use of AI — not just as a technological tool, but as a strategic roadmap for the company going forward.

Dr. Sabine Kapasi

So, see, we are in 2026 today, four years away from 2030, and a lot of countries — India included — have made plans for adoption by 2030. There are a couple of areas like workforce enablement for AI adoption, especially when we are talking about healthcare. We are looking at solving B2B cases, workflow management, time enhancement for the skilled workforce, and also some level of skill enhancement: creating bots or other agentic systems which can in some proportion aid people who are not as credentialed, let’s put it that way — enabling a frontline workforce, or enabling a GP to solve some cases which might otherwise be referred to a higher center. So we reduce and distribute the burdens in healthcare systems.

I think that is one use case, and that is something I would love for you to throw some light on, Chris. And also the non-sexy use cases, like the drug discovery cases, which are phenomenal not just for business but also for changing the world as we know it, and diagnostics. So if you can throw light on these three verticals and how you see them panning out in the next five years, that would be really, really helpful.

Chris Ciauri

I might frame it with this. You talked a little bit about it — your friends have kind of seen what the latest version of Claude can do. Anthropic is a five-year-old company. As a frontier lab, we’ve had a commercial model on the market for coming up on three years, and each model — and we’re now at a rate of a model release every two and a half months — is exponentially more intelligent, more powerful than the last one, and safe. I think that’s really important. We don’t see that stopping. So what makes me extremely optimistic about the ability to really transform healthcare on the many dimensions you talked about is that this technology will keep getting better and more enabling.

As long as we do it incredibly safely, the benefits are probably hard for us to imagine with how fast it’s moving, but I think it’s a tremendous opportunity.

Dr. Sabine Kapasi

Just before we close this, one more thing. LLMs versus small language models, right? Targeted use cases. How do you see the industry evolving in the next five years in health care? Targeted use cases?

Chris Ciauri

Yeah, I think what you will likely see is that smaller, targeted models will have a place — maybe out on the edge in specific use cases — and open source will have a place in that. As a frontier lab, we have one model. It’s Claude. It comes in a few versions so that it can be scaled up and down for different use cases. Our position in the market is: let’s make Claude the most capable and safe model that we possibly can, and let’s keep that exponential innovation going, because our place in the market is to drive the greatest amount of innovation and transformation. And there will definitely be a place for more edge use cases with smaller models.

I just think it’ll play out differently.

Dr. Sabine Kapasi

And do you see countries like India playing an interesting role in that? Sorry? How do you see countries like India playing an interesting role in that?

Chris Ciauri

Yeah, I mean, I think we’ll see many countries probably playing more on the small language edge use case side. Today, the frontier is in a couple of countries, but I think there will be opportunities that we can’t see.

Dr. Sabine Kapasi

Thank you so much. And Aditya, as a policymaker, can you throw a very quick light on where you see the next five years panning out, and what are the things we will need to be careful about that we don’t see today?

Dr. Aditya Yad

I think, in general, the trust around data — personal data, medical data — is still a debate. This is ongoing. There’s a lot of awareness building to be done. We have to gain the trust of the people, so that they trust the systems: what is happening with the data, where the data is flowing, and how they see the ultimate benefit from using this data. From that point onwards, we can really build different systems and think about new things, but that is still something we have to work on.

Dr. Sabine Kapasi

Thank you so much, Aditya. Thanks, Chris, for joining us. Essentially, creating equity in AI that is useful for all, especially for something like healthcare, is what we all strive for, and we hope it is going to change the world in the next five years. Thank you so much for joining us for this chat. Namaste. And see you all in Geneva next year, because the AI Summit will be in Switzerland. So we’re all welcome there as well. Thank you.

Related Resources: Knowledge base sources related to the discussion topics (41)
Factual Notes: Claims verified against the Diplo knowledge base (4)
Confirmed (high)

“Anthropic’s mission is to balance powerful model capability with rigorous safety, ensuring the system can say “I don’t know” rather than give confident but incorrect answers.”

The knowledge base notes that Anthropic’s model was chosen specifically because it can respond with “I don’t know” or express uncertainty, reflecting a safety-first design [S3].

Additional Context (medium)

“Anthropic emphasizes safety in its models, including features that let Claude terminate harmful or abusive conversations.”

Anthropic announced a safety feature that allows Claude Opus 4 and 4.1 to end conversations in extreme cases of harmful user input, illustrating the company’s broader safety focus [S110].

Additional Context (medium)

“Anthropic’s partnership with Google aims to achieve the highest standards of AI safety.”

Google and Anthropic have announced an expanded partnership specifically to develop and deploy AI responsibly with strong safety standards, reinforcing Anthropic’s safety-first stance [S115].

Additional Context (medium)

“Anthropic has launched Claude Life Sciences to support biotechnology research, indicating its involvement in drug‑discovery applications.”

Anthropic unveiled Claude for Life Sciences, integrating its models with scientific tools to accelerate research workflows, which aligns with the report’s claim about AI-enabled drug discovery and regulatory science [S116].

External Sources (117)
S1
Panel Discussion AI in Healthcare India AI Impact Summit — -Chris Ciauri: Managing Director at Anthropic, leads global expansion across EMEA, APAC and Latin America. Has over 25 y…
S2
Panel Discussion AI in Healthcare India AI Impact Summit — -Dr. Sabine Kapasi: Clinician/surgeon, discussion moderator. Has experience practicing as a surgeon and seeing 200 patie…
S3
https://dig.watch/event/india-ai-impact-summit-2026/panel-discussion-ai-in-healthcare-india-ai-impact-summit — And I think AlphaFold was a phenomenal change the way new protein discoveries are being done today. And I mean I could n…
S4
Panel Discussion AI in Healthcare India AI Impact Summit — I think that shouldn’t be so, right? And coming back, that is where I think it would be great to introduce Dr. Aditya Ya…
S5
From principles to practice: Governing advanced AI in action — Juha identifies the challenge of multiple overlapping initiatives creating excessive monitoring and reporting requiremen…
S6
Building Indias Digital and Industrial Future with AI — As India advances in digital public infrastructure and its AI ambitions, the key is how we ensure these systems remain t…
S7
AI for Social Good Using Technology to Create Real-World Impact — So I think I have to answer this in two parts. The first part is how do we basically leverage what Nandan refers to as t…
S8
Digital Health at the crossroads of human rights, AI governance, and e-trade (SouthCentre) — Furthermore, the handling of health data was seen as requiring a delicate balance between individual and collective righ…
S9
Keynote-Dario Amodei — Anthropic is making substantial commitments to India through establishing a Bengaluru office, hiring local leadership, a…
S10
Anthropic launches Bengaluru office to drive responsible AI in India — AI firm Anthropic, the company behind the Claude AI chatbot, is opening its first office in India, choosing Bengaluru as i…
S11
AI safety concerns grow after new study on misaligned behaviour — AI continues to evolve rapidly, but new research reveals troubling risks that could undermine its benefits. A recent study…
S12
Science as a Growth Engine: Navigating the Funding and Translation Challenge — Beyond just running trials more efficiently, though, there are a number of things that I think would transform this bott…
S13
AI revolutionises drug discovery with promising new treatments — AI is transforming the way new medicines are developed, with AI-powered drug discovery advancing at an unprecedented pace…
S14
Transcript from the hearing — And if if, again, if the path continues, I think we could get to a very dangerous place. I think it’s worth saying some …
S15
Claude AI gains power to end harmful chats — Anthropic has unveiled a new capability in its Claude AI models that allows them to end conversations they deem harmful or…
S16
Building the Workforce_ AI for Viksit Bharat 2047 — From the community health worker delivering nutrition to an expecting mother to the balancing worker strategizing access…
S17
Smaller Footprint Bigger Impact Building Sustainable AI for the Future — India focuses on smaller models for specific use cases rather than chasing trillion-parameter models
S18
https://dig.watch/event/india-ai-impact-summit-2026/impact-the-role-of-ai-how-artificial-intelligence-is-changing-everything — So that is a very good question. I think when you’re part of a group like the Tata’s that’s sort of spanning all sectors…
S19
The fading of human agency in automated systems — In many settings, humans retain formal accountability while losing meaningful influence over outcomes. When a decision i…
S20
I NTRODUCTION — – Standardized finance, procurement, and HR functions for operational consistency. – Enhanced security and compliance, a…
S21
Global AI Policy Framework: International Cooperation and Historical Perspectives — But so there’s that notion as well. And then I really think that, you know, I’m an Indian. I’m not saying this because I…
S22
Global South at the heart of India AI plan — India has unveiled the New Delhi Frontier AI Impact Commitments, a new initiative aimed at promoting inclusive and respons…
S23
Digital ECOnOMy POliCy lEgal inStRuMEntS — Accordingly, the risk-related language has been clarified. It was noted during the drafting process that the dictionar…
S24
NATIONAL CYBER SECURITY POLICY 2021 — Cyber risks cannot be fully eliminated, and are very dynamic and unpredictable. Cyber Security is about risk management …
S25
Opportunities, risks and policy implications — –  Background –  Selected policy issues – o Competition – o Data protection – o Liabilities – o Financial transactions…
S26
Subject matter — 90. To further address key supply chain risks and assist essential and important entities operating in sectors covered b…
S27
Open Forum #64 Local AI Policy Pathways for Sustainable Digital Economies — – **Infrastructure Sharing and Cooperative Models**: Multiple speakers advocated for shared computing infrastructure (re…
S28
Leaders’ Plenary | Global Vision for AI Impact and Governance Morning Session Part 1 — This is what I want to say in research. When we use AI wisely, it can lead to more innovation, more inclusion, and great…
S29
AI adoption reshapes UK scale-up hiring policy framework — AI adoption is prompting UK scale-ups to recalibrate workforce policies. Survey data indicates that 33% of founders antici…
S30
AI Impact Summit 2026: Global Ministerial Discussions on Inclusive AI Development — “First, people must be at the center of AI strategy, as we heard all along today”[107]. “Investment in skills, lifelong …
S31
TABLE OF CONTENTS — 1. The use of ICTs is leveraged to improve the lives of disadvantaged groups 2. Equality is achieved in access across ar…
S32
Contents — Despite the generally positive findings on the effects of digital technologies, there are obstacles to their adoptio…
S33
Keynote-Martin Schroeter — India’s Strategic Role and Policy Landscape
S34
Designing Indias Digital Future AI at the Core 6G at the Edge — India’s scale advantage and diverse datasets provide opportunities to reduce intelligence costs and train models for loc…
S35
Laying the foundations for AI governance — Lan Xue: Okay. I think my job is easier. I can say I agree with all of them. So I think that’s probably the easiest way….
S36
Why science metters in global AI governance — This comment demonstrated how different scientific assessments lead to fundamentally different policy responses. It show…
S37
GOVERNING AI FOR HUMANITY — – 120 Supported by the proposed AI office, the standards exchange would also benefit from strong ties to the internation…
S38
WS #283 AI Agents: Ensuring Responsible Deployment — Carter acknowledges that there is no consensus definition of what agentic AI is, and the technology is still emerging. T…
S39
Shaping the Future AI Strategies for Jobs and Economic Development — It requires risk tolerance. It requires capital that understands that building sovereign AI capacity involves experiment…
S40
Conversational AI in low income & resource settings | IGF 2023 — Digital patient engagement is crucial for maintaining relationships with patients even after they leave the hospital. Pl…
S41
Panel Discussion Summary: AI Governance Implementation and Capacity Building in Government — The discussion revealed a common theme across different contexts: the gap between policy ambition and implementation cap…
S42
WS #279 AI: Guardian for Critical Infrastructure in Developing World — Daniel Lohrman: I think they really lead into that really well. And I think that this is a huge challenge. I would ju…
S43
The mismatch between public fear of AI and its measured impact — In medicine and science, AI has shown promise in pattern recognition and data analysis. Deployment is cautious, as clinic…
S44
AI as critical infrastructure for continuity in public services — “We shouldn’t be fixing things after the fact, but we should go on an input before the deployment.”[115]. “The second on…
S45
MedTech and AI Innovations in Public Health Systems — Key barriers to scaling AI solutions included data quality issues, the need for integration within existing workflows ra…
S46
Democratizing AI Building Trustworthy Systems for Everyone — And so there are different in quotes, markets here at UL. People who can pay at different levels. Even within a country …
S47
Panel Discussion AI in Healthcare India AI Impact Summit — No, I think that’s true. So we have been talking to medical device companies who are now targeting new age diagnostic to…
S48
AI-assisted diagnostics expand across Europe — AI-powered diagnostics are being implemented across Europe, with France, Portugal, Hungary, Sweden and the Netherlands le…
S49
Enhancing rather than replacing humanity with AI — AI development is not some unstoppable force beyond our control. It’s shaped by developers, institutions, policymakers, …
S50
Ensuring Safe AI_ Monitoring Agents to Bridge the Global Assurance Gap — Stephanie outlines six challenge areas (infrastructure, skills, risk profile, documentation, incentives, etc.) and argue…
S51
Overview of AI policy in 15 jurisdictions — Summary China remains a global leader in AI, driven by significant state investment, a vast tech ecosystem and abundant …
S52
EU Artificial Intelligence Act — (28) AI systems could have an adverse impact to health and safety of persons, in particular when such systems operate as…
S53
Who Watches the Watchers Building Trust in AI Governance — The insurance implications prove particularly significant. Just as other industries require various forms of assessment,…
S54
Building Indias Digital and Industrial Future with AI — So I think we are in a very good place. We have got very robust infrastructure. And how do we now navigate this world of…
S55
WSIS Action Line C7: E-health – Fostering foundations for digital health transformation in the age of AI — ## Background and Context Hani Eskandar: Yes. Okay. I’ll try to respond to some questions very quickly. There is alread…
S56
Impact & the Role of AI How Artificial Intelligence Is Changing Everything — The Indian approach emphasizes the country’s young demographic advantage combined with strong moral and spiritual founda…
S57
Scaling AI Beyond Pilots: A World Economic Forum Panel Discussion — – Amin Nasser- Julie Sweet – Julie Sweet- Amin Nasser Development | Economic Focus on outcomes and value creation rat…
S58
1 Introduction — This objective is aimed at providing appropriate conditions for developing public research and improving its quality . …
S59
Ad Hoc Consultation: Thursday 1st February, Morning session — In addition, Moldova’s agreement aligns with SDG 16, centred on ‘Peace, Justice and Strong Institutions’, and the need f…
S60
[Parliamentary Session 3] Researching at the frontier: Insights from the private sector in developing large-scale AI systems — While both speakers acknowledge the importance of governance, there’s an unexpected difference in their emphasis on who …
S61
AI-Powered Chips and Skills Shaping Indias Next-Gen Workforce — The discussion revealed strong alignment between industry needs, academic capabilities, and government policy. David Fre…
S62
Keynote-Roy Jakobs — This comment introduces a systems-thinking perspective that acknowledges the complexity of AI implementation beyond just…
S63
WS #270 Understanding digital exclusion in AI era — An audience member raises the question of whether to wait for government to introduce AI policies or let the industry le…
S64
From India to the Global South_ Advancing Social Impact with AI — 60 ,000 crores is being put in our ITIs. So our ITIs are the grassroots organizations, government ITIs, there’s maybe mo…
S65
Panel Discussion AI in Healthcare India AI Impact Summit — High level of consensus with complementary perspectives rather than disagreements. The implications suggest that success…
S66
Keynote-Dario Amodei — “On the positive side, we have the potential to cure diseases that have been incurable for thousands of years, to radica…
S67
Global South at the heart of India AI plan — India has unveiled the New Delhi Frontier AI Impact Commitments, a new initiative aimed at promoting inclusive and respons…
S68
Building Scalable AI Through Global South Partnerships — I was just going to do one more thing, which is thank you, Shalini, and thank you to the panel for allowing us this smal…
S69
Opening keynote — This digital divide is emblematic of broader societal inequities, challenging us to consider the judgment of future gene…
S70
Building Trustworthy AI Foundations and Practical Pathways — Alright, I can take the clicker. So, I will keep it slightly brief and I’m going to skip over some slides in the interes…
S71
UK government makes bold move with AI tutoring trials for 450,000 pupils — The government plans to trial AI tutoring tools in secondary schools, with nationwide availability targeted for the end of 2…
S72
BETWEEN — 3. Risk management shall be applied in such a manner that it does not create arbitrary or unjustifiable dis…
S73
NATIONAL CYBER SECURITY STRATEGY — Risk management is the process of identifying, assessing, and responding to risk and then determining an acceptable leve…
S74
Building Trusted AI at Scale Cities Startups &amp; Digital Sovereignty – Keynote Ebba Busch Deputy Prime Minister Sweden — “Sweden is a proud friend of India.”[21]. “Sweden intends to be a reliable and innovative partner as India continues its…
S75
Leaders’ Plenary | Global Vision for AI Impact and Governance Morning Session Part 1 — This is what I want to say in research. When we use AI wisely, it can lead to more innovation, more inclusion, and great…
S76
Main Session 1: Global Access, Global Progress: Managing the Challenges of Global Digital Adoption — This discussion focused on the challenges and opportunities of global digital adoption, addressing both the need to conn…
S77
Workshop 6: Perception of AI Tools in Business Operations: Building Trustworthy and Rights-Respecting Technologies — Education is crucial – “education, education, education” to address workforce concerns
S78
Powering the Technology Revolution / Davos 2025 — Workforce and Education Needs
S79
IGF Parliamentary track – Session 2 — Updated education is crucial to prepare the workforce for future technological challenges and opportunities.
S80
Contents — Despite the generally positive findings on the effects of digital technologies, there are obstacles to their adoptio…
S81
Keynote-Olivier Blum — -India’s strategic role in global energy innovation: India is positioned as a key hub for developing next-generation ene…
S82
Designing Indias Digital Future AI at the Core 6G at the Edge — India’s scale advantage and diverse datasets provide opportunities to reduce intelligence costs and train models for loc…
S83
The Global Power Shift India’s Rise in AI & Semiconductors — But there are unique places where government can de-risk through public-private partnerships that would enable this ec…
S84
Defying Cognitive Atrophy in the Age of AI: A World Economic Forum Stakeholder Dialogue — The discussion began with a cautiously optimistic tone, acknowledging both opportunities and risks. However, the tone be…
S85
Catalyzing Global Investment in AI for Health_ WHO Strategic Roundtable — The tone was professional and forward-looking, with a sense of urgency about making AI work effectively in healthcare. W…
S86
WSIS Action Line C7: E-health – Fostering foundations for digital health transformation in the age of AI — The discussion maintained a professional, collaborative, and forward-looking tone throughout. It began with formal prese…
S87
Open Forum #30 High Level Review of AI Governance Including the Discussion — The discussion maintained a collaborative and constructive tone throughout, characterized by mutual respect and shared c…
S88
Skilling and Education in AI — The tone was cautiously optimistic throughout. Speakers acknowledged both the tremendous opportunities AI presents for I…
S89
Scaling Trusted AI_ How France and India Are Building Industrial & Innovation Bridges — The discussion maintained a consistently optimistic and collaborative tone throughout, characterized by mutual respect b…
S90
Empowering India &amp; the Global South Through AI Literacy — The discussion maintained an optimistic and collaborative tone throughout, with panelists sharing positive field experie…
S91
Keynote_ 2030 – The Rise of an AI Storytelling Civilization _ India AI Impact Summit — The tone is consistently optimistic, visionary, and inspirational throughout. The speaker maintains an enthusiastic and …
S92
Session — The tone was primarily analytical and forward-looking, with the speaker presenting evidence-based predictions while ackn…
S93
WS #187 Bridging Internet AI Governance From Theory to Practice — The discussion maintained a thoughtful but increasingly cautious tone throughout. It began optimistically, with speakers…
S94
Safeguarding Children with Responsible AI — The discussion maintained a tone of “measured optimism” throughout. It began with urgency and concern (particularly in B…
S95
Pathways to De-escalation — The overall tone was serious and somewhat cautious, reflecting the gravity of cybersecurity challenges. While the speake…
S96
Shaping the Future AI Strategies for Jobs and Economic Development — The discussion maintained an optimistic yet pragmatic tone throughout. While acknowledging significant challenges around…
S97
WS #302 Upgrading Digital Governance at the Local Level — The discussion maintained a consistently professional and collaborative tone throughout. It began with formal introducti…
S98
WS #462 Bridging the Compute Divide a Global Alliance for AI — The discussion maintained a constructive and collaborative tone throughout, with participants building on each other’s i…
S99
Setting the Scene  — The tone is professional, informative, and collaborative throughout. Kent Bressie maintains an educational approach whil…
S100
Fireside Conversation: 01 — The conversation maintained a forward-looking, solution-oriented tone throughout, focusing on practical pathways rather …
S101
How Trust and Safety Drive Innovation and Sustainable Growth — The discussion concluded with panelists predicting what AI summits might be called in five years’ time. Their responses …
S102
Science as a Growth Engine: Navigating the Funding and Translation Challenge — And that can also, then, decrease the industries wanting to invest if the hurdle of an extra three or five years of regu…
S103
Comprehensive Report: “Converging with Technology to Win” Panel Discussion — The discussion began with an optimistic, exploratory tone as panelists shared different models and success stories. The …
S104
How AI Drives Innovation and Economic Growth — The discussion successfully balanced optimistic potential with realistic assessment of implementation challenges. Unlike…
S105
Keynote Adresses at India AI Impact Summit 2026 — -S. Krishnan- Secretary (India) And we’re doing it in a partnership with the world’s largest democracy, a nation of 1 ….
S106
AI Automation in Telecom_ Ensuring Accountability and Public Trust India AI Impact Summit 2026 — Chairman, member, Mr. Mitter, distinguished delegates, my fellow panelists, I welcome everyone to this second session. T…
S107
Keynote by Sangita Reddy Joint Managing Director Apollo Hospitals India AI Impact Summit — Dr. Reddy challenged conventional thinking by reframing India’s healthcare challenges as competitive advantages. “Health…
S108
Technology in the World / Davos 2025 — – Dario Amodei: CEO of Anthropic – Marc Benioff: CEO of Salesforce Nicholas Thompson: We have one of the most excitin…
S109
Trade Beyond COVID-19: Building Resilience — Mr Paul Polman(Former CEO, Unilever, and Vice-Chair, UN Global Impact) stated that trade should rather solve poverty tha…
S110
Anthropic introduces a safety feature allowing Claude AI to terminate harmful conversations — Anthropic has announced that its Claude Opus 4 and 4.1 models can now end conversations in extreme cases of harmful or abu…
S111
Claude AI will remain ad-free to preserve user trust and deep reasoning — Anthropic’s official announcement emphasises that Claude will not carry advertising or ad-influenced content within conver…
S112
Claude can now read your Gmail and Docs — Anthropic has introduced a new integration that allows its AI chatbot, Claude, to connect directly with Google Workspace. …
S113
Leaders TalkX: Looking Ahead: Emerging tech for building sustainable futures — Dr. Pol Vandenbroucke:Thank you very much for the great question. I first of all would like to thank the conference orga…
S114
Claude Opus 4 sets a benchmark in AI coding as Anthropic’s revenue doubles — Anthropic has released Claude Opus 4 and Claude Sonnet 4, its most advanced AI models to date. The launch comes amid rapid …
S115
Google and Anthropic announce partnership for AI safety — Google and Anthropic have announced an expanded partnership to achieve the highest standards of AI safety. Anthropic will …
S116
Anthropic unveils Claude Life Sciences to transform research efficiency — Anthropic has unveiled Claude for Life Sciences, its first major launch in the biotechnology sector. The new platform inte…
S117
Folding Science / DAVOS 2025 — Alison Snyder: OK. My last question is, so again, circling back to where we were, some tech leaders have talked a lo…
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
Chris Ciauri
7 arguments, 148 words per minute, 2076 words, 837 seconds
Argument 1
Reduce admin burden & improve access (Chris Ciauri)
EXPLANATION
Anthropic sees a huge opportunity to use AI to cut down the administrative workload of clinicians and to expand healthcare access, especially in low‑resource settings. By automating paperwork and summarising clinical information, doctors can spend more time on patient care, while AI‑driven tools can help reach underserved populations.
EVIDENCE
Chris highlighted that in the United States only about 30 % of a clinician’s time is spent on patient care, the rest being paperwork, and that AI could reduce this administrative load dramatically [36-38]. He noted that in India the average primary-care visit lasts only two minutes, indicating a severe access challenge that AI could help alleviate [38-41]. He gave a concrete example of Banner Health using Anthropic’s model to summarise 100-page oncology reports, cutting eight hours of clinician time down to a concise summary, thereby freeing clinicians to focus on care decisions [124-127].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Panel discussion highlighted administrative paperwork as a pervasive burden and cited Anthropic’s tools reducing clinician paperwork; governance initiatives also aim to cut admin load [S1][S5].
MAJOR DISCUSSION POINT
Admin burden reduction and access improvement
AGREED WITH
Dr. Sabine Kapasi
Argument 2
India’s digital health record & cloud adoption as AI foundation (Chris Ciauri)
EXPLANATION
Chris argues that India’s already‑established digital health record infrastructure and its rapid cloud adoption create a fertile ground for AI deployment in healthcare. The existing digital ecosystem can serve as a strong data foundation for AI models to improve outcomes at scale.
EVIDENCE
He described India’s digital healthcare system as “the envy of the world,” providing a solid platform for AI applications [45-46]. He also cited statistics showing India’s leading position in cloud adoption outside the U.S., being second globally and the fastest-growing, with Anthropic’s revenue from India doubling in four months [186-190].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
India’s advanced digital health record system and rapid cloud adoption are noted as a strong AI foundation, with discussions on leveraging the digital stack for health and ensuring trusted, interoperable infrastructure [S6][S7][S1].
MAJOR DISCUSSION POINT
Digital health records and cloud as AI enablers
AGREED WITH
Dr. Aditya Yad
Argument 3
Anthropic’s Bengaluru office to co‑build solutions with local teams (Chris Ciauri)
EXPLANATION
Anthropic has opened an office in Bengaluru to work directly on the ground with Indian partners, ensuring solutions are tailored to local needs and contexts. This local presence is intended to accelerate co‑creation of AI‑driven healthcare tools.
EVIDENCE
Chris announced that Anthropic recently launched operations in Bengaluru to address Indian healthcare problems by building solutions locally [47-49].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Anthropic announced a Bengaluru office to work locally with Indian partners, confirming the commitment to co-develop solutions in India [S9][S10].
MAJOR DISCUSSION POINT
Local office for co‑development
Argument 4
Claude’s “I don’t know” safety design to avoid confident errors (Chris Ciauri)
EXPLANATION
Anthropic designs its Claude models to express uncertainty (“I don’t know”) rather than giving confident but potentially wrong answers, which is crucial for high‑risk domains like healthcare. This safety‑first approach aims to prevent harmful misinformation.
EVIDENCE
He explained that Claude is trained to use language such as “I don’t know” and “I’m not certain,” which he said is critical in healthcare where stakes are high [120-122][128-130].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Claude is designed to express uncertainty with phrases like “I don’t know,” a safety feature emphasized in the summit and discussed in safety studies [S1][S11].
MAJOR DISCUSSION POINT
Safety‑first uncertainty handling
Argument 5
Accelerate drug development cycles from weeks to hours (Chris Ciauri)
EXPLANATION
Anthropic’s AI tools can dramatically shorten drug discovery timelines, turning multi‑week processes into hour‑long tasks, thereby speeding the delivery of new medicines to patients.
EVIDENCE
Chris cited work with customers such as Novo Nordisk and Sanofi where AI reduced drug development cycle times from eight weeks to eight hours, representing a “phenomenal difference” in speed to market [136-138].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Anthropic’s AI reduced drug development cycles from eight weeks to eight hours, aligning with broader reports of AI accelerating drug discovery [S1][S13].
MAJOR DISCUSSION POINT
Speeding drug discovery
Argument 6
Ongoing release of larger, safer Claude models with exponential capability (Chris Ciauri)
EXPLANATION
Anthropic releases a new Claude model roughly every two and a half months, each iteration markedly more powerful and safer than the previous one, ensuring continuous improvement in AI performance for healthcare use cases.
EVIDENCE
He noted that Anthropic has been releasing new models at a rate of every 2.5 months, each “exponentially more intelligent, more powerful than the last one, and safe” [270-272].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
New Claude models are released roughly every 2.5 months, each markedly more capable and safer than the previous version [S1].
MAJOR DISCUSSION POINT
Rapid model iteration and safety
Argument 7
Growth of small, language‑specific models for edge use cases, especially in India (Chris Ciauri)
EXPLANATION
Chris foresees a complementary ecosystem where smaller, targeted language models address niche, edge‑case applications, particularly in multilingual contexts like India, alongside the larger Claude models.
EVIDENCE
He stated that smaller, targeted use-case models will have a place, especially for edge applications, and that open-source models could serve these needs, while Anthropic continues to scale Claude for broader use [278-284].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The discussion noted a role for edge-case, smaller language models, especially in multilingual contexts like India, and highlighted India’s focus on such models [S1][S17].
MAJOR DISCUSSION POINT
Edge models for localized needs
Dr. Sabine Kapasi
6 arguments, 152 words per minute, 2766 words, 1089 seconds
Argument 1
Need to pinpoint near‑term ROI for LMICs (Dr. Sabine Kapasi)
EXPLANATION
Sabine stresses that, given low digital adoption in many low‑ and middle‑income countries, it is essential to identify AI solutions that will deliver the greatest return on investment within the next three to five years.
EVIDENCE
In her opening remarks she highlighted the need to determine where AI solutions have the largest ROI and opportunity over the next 3-5 years for low- and middle-income countries [13-15].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The summit stressed the importance of identifying AI solutions with the highest ROI for low- and middle-income countries within the next 3-5 years [S1].
MAJOR DISCUSSION POINT
Identifying near‑term ROI
Argument 2
AI as preparation tool; clinicians retain final judgment (Dr. Sabine Kapasi)
EXPLANATION
Sabine frames AI as a preparatory aid that can provide information and suggestions, while the ultimate clinical decision must remain with human clinicians, preserving professional judgment and patient safety.
EVIDENCE
She asked how to position AI as a preparation tool and emphasized that clinicians are responsible for judgment, stating “AI is for preparation. Clinicians are for judgment” [232-236].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
A counterpoint raised concerns that human agency may diminish as AI systems influence decisions, warning that clinicians could lose meaningful influence despite formal accountability [S19].
MAJOR DISCUSSION POINT
AI as support, not decision‑maker
Argument 3
Workflow automation to cut costs and boost system efficiency (Dr. Sabine Kapasi)
EXPLANATION
Sabine points out that automating administrative workflows can lower costs, improve efficiency, and free up clinician time, which is especially valuable in resource‑constrained health systems.
EVIDENCE
She referenced the need to identify near-term AI opportunities that improve ROI and mentioned that reducing administrative burden could address a $1 trillion problem, linking workflow automation to cost reduction and system efficiency [13-15][173-176].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Reducing administrative paperwork through AI was highlighted as a way to lower costs and improve system efficiency, echoing broader calls to streamline admin burdens [S1][S5].
MAJOR DISCUSSION POINT
Automation for cost and efficiency
AGREED WITH
Chris Ciauri
Argument 4
Clear training on AI limits; emphasize “I don’t know” responses (Dr. Sabine Kapasi)
EXPLANATION
Sabine argues that healthcare workers must be explicitly trained on the limits of AI, especially the model’s uncertainty responses, to ensure safe adoption and prevent over‑reliance on AI outputs.
EVIDENCE
She raised questions about how to train the healthcare workforce to understand AI limits and to emphasize “I don’t know” responses, noting the need for clear communication about AI uncertainty [221-227].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Claude’s uncertainty responses were highlighted, and training on AI limits was recommended to ensure safe adoption [S1][S11].
MAJOR DISCUSSION POINT
Training on AI uncertainty
AGREED WITH
Chris Ciauri
Argument 5
AI can raise diagnostic accuracy while reducing test costs (Dr. Sabine Kapasi)
EXPLANATION
Sabine suggests that AI‑enhanced diagnostics can improve accuracy and lower the price of tests, making screening and preventive care more affordable and widely accessible.
EVIDENCE
She discussed how rapid advances in diagnostic technology enable home-based testing, dramatically cutting testing costs and expanding screening reach, and noted that insurers often reject higher-cost diagnostics despite potential savings from avoided expensive treatments [173-176][210-212].
MAJOR DISCUSSION POINT
Improved, cheaper diagnostics
Argument 6
Swiss research strength combined with Indian implementation to create tailored AI tools (Dr. Sabine Kapasi)
EXPLANATION
Sabine highlights the complementary strengths of Switzerland’s deep biotech research heritage and India’s rapid digital health adoption, proposing that their collaboration could produce AI tools uniquely suited to global health challenges.
EVIDENCE
She contrasted Switzerland’s legacy of biotech research with India’s fast-growth, high-adoption digital health ecosystem, noting that together they could develop tailored AI solutions for health care [70-72][193-194].
MAJOR DISCUSSION POINT
Switzerland‑India synergy
Dr. Aditya Yad
5 arguments · 174 words per minute · 1469 words · 503 seconds
Argument 1
Switzerland‑India free‑trade agreement & $100 bn health investment (Dr. Aditya Yad)
EXPLANATION
Aditya outlines the recent Switzerland‑India free‑trade agreement, which includes a commitment of $100 billion in investments across sectors, including healthcare, and aims to create one million jobs in India over the next 15 years.
EVIDENCE
He explained that the free-trade agreement between Switzerland (and the EFTA countries) and India includes a $100 billion investment commitment covering various sectors, health care among them, and a target of creating one million direct jobs in India over 15 years [84-89].
MAJOR DISCUSSION POINT
Trade agreement and health investment
Argument 2
Building public trust in medical data use and governance (Dr. Aditya Yad)
EXPLANATION
Aditya stresses that public confidence in how personal medical data is handled is essential for AI adoption; ongoing awareness‑building and transparent data governance are required to earn that trust.
EVIDENCE
He noted that trust around personal and medical data remains a debate, emphasizing the need for awareness-building and demonstrating clear benefits to gain public confidence in data use [293-296].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Discussions emphasized the need for transparent data governance and public trust in health data handling, noting the balance of individual and collective rights [S8][S5].
MAJOR DISCUSSION POINT
Data trust and governance
Argument 3
AI‑enabled biomanufacturing improves yields and lowers costs (Dr. Aditya Yad)
EXPLANATION
Aditya describes how integrating AI into biomanufacturing can optimise process parameters, increase yields, and reduce production costs, making high‑quality biologics more affordable.
EVIDENCE
He explained that AI tools can monitor and self-learn optimal parameters in controlled biomanufacturing environments, improving yields and lowering costs, and cited examples of large firms (Novartis, Roche, Lonza) and smaller bioreactors that benefit from AI-driven optimisation [166-170].
MAJOR DISCUSSION POINT
AI in biomanufacturing
AGREED WITH
Chris Ciauri
Argument 4
Government program to train SME CEOs on embedding AI from inception (Dr. Aditya Yad)
EXPLANATION
Aditya details a state‑run initiative that brings together CEOs of small and medium enterprises to educate them on integrating AI from the earliest stages of product development, aiming to overcome resistance and achieve consistent AI adoption across the sector.
EVIDENCE
He described a program launched a year ago that convenes cohorts of companies, many funded by the state, to train CEOs on strategic AI integration from the start, addressing resistance and ensuring consistent AI application across SMEs [250-255].
MAJOR DISCUSSION POINT
CEO training program
Argument 5
Demonstrating cost‑effectiveness to insurers is key for adoption (Dr. Aditya Yad)
EXPLANATION
Aditya points out that insurers are reluctant to reimburse more expensive diagnostic tools unless clear cost‑benefit evidence shows they prevent far higher treatment expenses, making economic justification essential for widespread adoption.
EVIDENCE
He explained that insurers currently reject diagnostics that cost two to four times more than existing tests because they cannot be convinced of cost-effectiveness relative to expensive therapies such as $20,000-$50,000 cancer treatments, highlighting the need for a compelling economic narrative [209-212].
MAJOR DISCUSSION POINT
Economic case for diagnostics
Agreements
Agreement Points
Reducing administrative burden and improving clinician time
Speakers: Chris Ciauri, Dr. Sabine Kapasi
Reduce admin burden & improve access (Chris Ciauri)
Workflow automation to cut costs and boost system efficiency (Dr. Sabine Kapasi)
Both speakers highlight that AI can dramatically cut paperwork and administrative tasks, freeing clinicians to focus on patient care and lowering system costs [36-38][124-127][13-15][173-176].
POLICY CONTEXT (KNOWLEDGE BASE)
The need to streamline clinical workflows and cut admin time is highlighted in WHO guidance on AI for health and in digital patient engagement platforms such as WhatsApp, which aim to maintain clinician-patient relationships while reducing paperwork [S40]. The World Economic Forum notes that AI can relieve nurses of 10-15 minutes of admin per hour, and policy discussions emphasize simplifying administrative processes as a priority for health systems [S57][S59].
AI safety through uncertainty handling and need for clear training
Speakers: Chris Ciauri, Dr. Sabine Kapasi
Claude’s “I don’t know” safety design (Chris Ciauri)
Clear training on AI limits; emphasize “I don’t know” responses (Dr. Sabine Kapasi)
Both agree that AI models must express uncertainty (e.g., “I don’t know”) and that healthcare workers need explicit training on these limits to ensure safe deployment [120-122][128-130][221-227].
POLICY CONTEXT (KNOWLEDGE BASE)
Policy frameworks stress managing scientific and definitional uncertainty to avoid policy gaps; this is reflected in analyses of how scientific uncertainty translates to policy uncertainty (e.g., employment impacts) and calls for clear training and oversight mechanisms [S36][S38]. The EU AI Act specifically flags health-related safety risks and mandates robust risk management, while WHO’s ethics and governance guidance calls for transparent training and safety standards for AI in health [S52][S55].
India’s digital health record system and cloud adoption provide a strong AI foundation
Speakers: Chris Ciauri, Dr. Aditya Yad
India’s digital health record & cloud adoption as AI foundation (Chris Ciauri)
India’s high smartphone and digital adoption rates (Dr. Aditya Yad)
Both point to India’s advanced digital health infrastructure and rapid cloud/smartphone adoption as a fertile base for scaling AI-driven healthcare solutions [45-46][186-190][182-184].
POLICY CONTEXT (KNOWLEDGE BASE)
India’s national AI strategy emphasizes a robust digital and cloud infrastructure as a springboard for AI in health, noting the country’s mature e-health record ecosystem and industrial AI push [S54]. Complementary discussions on building a next-gen AI-enabled workforce underline the importance of cloud-based data platforms for scaling health AI solutions [S61].
AI can accelerate drug development and biomanufacturing, lowering costs and time‑to‑market
Speakers: Chris Ciauri, Dr. Aditya Yad
Accelerate drug development cycles (Chris Ciauri)
AI‑enabled biomanufacturing improves yields and lowers costs (Dr. Aditya Yad)
Both see AI as a catalyst for faster drug discovery and more efficient biomanufacturing, shortening development cycles from weeks to hours and improving yields while reducing costs [136-138][166-170].
Capacity building and local collaboration are essential for AI adoption in healthcare
Speakers: Chris Ciauri, Dr. Aditya Yad, Dr. Sabine Kapasi
Anthropic’s Bengaluru office to co‑build solutions (Chris Ciauri)
Government program to train SME CEOs on AI (Dr. Aditya Yad)
Workforce enablement for AI adoption (Dr. Sabine Kapasi)
All three stress the need for on-the-ground capacity development, through local offices, state-run CEO training, and broader workforce enablement, to ensure AI tools are effectively integrated into health systems [47-49][278-284][250-255][259-262].
POLICY CONTEXT (KNOWLEDGE BASE)
Multiple sources underline the need for sovereign AI capacity, interdisciplinary skill development, and collaboration between public, private, and civil-society actors to translate policy ambition into implementation (e.g., risk-tolerant sovereign AI programmes, capacity-building gaps, and democratizing AI to avoid digital divides) [S39][S41][S46][S56][S61].
Building public trust in medical data is prerequisite for AI deployment
Speakers: Dr. Sabine Kapasi, Dr. Aditya Yad
AI as preparation tool; need training (Dr. Sabine Kapasi)
Building public trust in medical data (Dr. Aditya Yad)
Both emphasize that gaining public confidence in how personal health data is used, through transparent governance and education, is critical for scaling AI solutions in healthcare [232-236][221-227][293-296].
POLICY CONTEXT (KNOWLEDGE BASE)
Trust is repeatedly cited as a barrier; public fear often exceeds measured impact, prompting calls for independent oversight bodies, transparent data governance, and trust-building measures in public health systems and AI assurance frameworks [S43][S44][S45][S53].
Similar Viewpoints
Both view India’s digital ecosystem—robust health records, cloud services, and widespread smartphone penetration—as a strategic platform for deploying AI‑driven health solutions [45-46][186-190][182-184].
Speakers: Chris Ciauri, Dr. Aditya Yad
India’s digital health record & cloud adoption as AI foundation (Chris Ciauri)
India’s high smartphone and digital adoption rates (Dr. Aditya Yad)
Both agree that safety in healthcare AI hinges on models expressing uncertainty and that clinicians must be trained to interpret these signals appropriately [120-122][128-130][221-227].
Speakers: Chris Ciauri, Dr. Sabine Kapasi
Claude’s “I don’t know” safety design (Chris Ciauri)
Clear training on AI limits; emphasize “I don’t know” responses (Dr. Sabine Kapasi)
Both see AI as a transformative force across the pharmaceutical pipeline—from discovery to manufacturing—delivering speed and cost efficiencies [136-138][166-170].
Speakers: Chris Ciauri, Dr. Aditya Yad
Accelerate drug development cycles (Chris Ciauri)
AI‑enabled biomanufacturing improves yields and lowers costs (Dr. Aditya Yad)
Unexpected Consensus
AI should never replace clinician judgment despite being a powerful tool
Speakers: Chris Ciauri, Dr. Sabine Kapasi
Claude’s “I don’t know” safety design (Chris Ciauri)
AI as preparation tool; clinicians retain final judgment (Dr. Sabine Kapasi)
A technology executive and a practicing clinician converge on the principle that AI must remain a support system, never a decision-maker, highlighting a rare cross-disciplinary alignment on safety and professional autonomy [120-122][128-130][232-236].
POLICY CONTEXT (KNOWLEDGE BASE)
Guidelines from WHO and broader AI governance discussions stress that AI must augment, not replace, clinicians, emphasizing clinical responsibility and the need for human oversight in medical decision-making [S43][S49][S55].
Economic justification for AI‑enhanced diagnostics must be demonstrated to insurers
Speakers: Dr. Sabine Kapasi, Dr. Aditya Yad
AI can raise diagnostic accuracy while reducing test costs (Dr. Sabine Kapasi)
Demonstrating cost‑effectiveness to insurers is key for adoption (Dr. Aditya Yad)
Both recognize that without clear cost-benefit evidence, insurers will resist reimbursing newer AI-driven diagnostic tools, linking clinical innovation to health-finance dynamics in an unexpected convergence of clinical and policy perspectives [210-212][293-296].
POLICY CONTEXT (KNOWLEDGE BASE)
Adoption of AI diagnostics is linked to reimbursement and insurance models; panels highlight the necessity of insurance mechanisms, verification processes, and clear cost-benefit evidence to secure insurer coverage for AI-driven diagnostic tools [S47][S50][S53].
Overall Assessment

The panel shows strong convergence on four core themes: (1) AI should reduce administrative load and improve clinician efficiency; (2) safety must be built‑in via uncertainty handling and rigorous training; (3) India’s digital ecosystem is a strategic launchpad for AI in health; (4) capacity building, local collaboration, and public trust are essential for scaling AI solutions. These agreements span AI safety, digital infrastructure, economic impact, and governance.

High consensus across technical, clinical, and policy dimensions, indicating a shared vision that AI can be responsibly deployed in healthcare if supported by robust digital foundations, safety‑first design, capacity development, and transparent data governance. This alignment suggests momentum for coordinated actions among industry, academia, and governments to advance AI‑enabled health systems in LMICs.

Differences
Different Viewpoints
Who should lead AI adoption in Indian healthcare – a private AI firm co‑creating solutions on the ground versus industry/government‑driven CEO training and broader sector leadership
Speakers: Chris Ciauri, Dr. Aditya Yad
Anthropic’s Bengaluru office will co‑build solutions locally with Indian partners (Chris Ciauri)
The industry needs to take charge and a state‑run program will train SME CEOs to embed AI from inception (Dr. Aditya Yad)
Chris emphasizes Anthropic’s direct local presence as the engine for AI solution development in India [47-49], while Aditya stresses that the broader industry, supported by a government-run CEO training programme, must lead AI integration and that without such sector-wide leadership adoption will be fragmented [257-258].
POLICY CONTEXT (KNOWLEDGE BASE)
The debate mirrors ongoing discussions about private-sector-government collaboration versus parliamentary or public-sector leadership in AI rollout, with examples of private firms partnering with ministries and calls for stronger governmental stewardship of AI initiatives [S42][S60][S63][S39].
Confidence in the speed of AI‑driven transformation – rapid, exponential impact versus uncertainty about near‑term outcomes
Speakers: Chris Ciauri, Dr. Sabine Kapasi
AI technology will keep improving rapidly, with new Claude models every 2.5 months, delivering hard‑to‑imagine benefits soon (Chris Ciauri)
It is not possible to predict AI impact ten years out (Dr. Sabine Kapasi)
Chris expresses strong optimism that AI will transform healthcare quickly, citing frequent model releases and fast-moving benefits [270-273], whereas Sabine cautions that a ten-year horizon cannot be forecasted, indicating a more measured view of AI’s timeline [162-163].
POLICY CONTEXT (KNOWLEDGE BASE)
Governance literature notes a pacing problem where technology outstrips policy, creating uncertainty about near-term impacts; this tension is reflected in analyses of scientific uncertainty translating to policy uncertainty and the lack of consensus on agentic AI definitions [S35][S36][S38].
Priority of AI use‑cases – administrative workload reduction versus diagnostic cost reduction and insurance adoption
Speakers: Chris Ciauri, Dr. Sabine Kapasi
Reducing clinicians’ administrative burden is a $1 trillion problem and the biggest near‑term ROI (Chris Ciauri)
Making diagnostics cheaper and proving cost‑effectiveness to insurers is essential for scaling screening and preventive care (Dr. Sabine Kapasi)
Chris highlights admin-burden reduction as the most lucrative near-term opportunity, quantifying it as a trillion-dollar problem [132-135], while Sabine stresses that affordable, AI-enhanced diagnostics are needed to convince insurers and expand preventive screening [173-176][210-212].
POLICY CONTEXT (KNOWLEDGE BASE)
Stakeholders highlight competing priorities: AI for admin relief (e.g., reducing nurse paperwork) versus AI for diagnostic efficiency and insurer uptake, as seen in European diagnostic deployments and Indian diagnostic adoption challenges [S57][S48][S47].
Unexpected Differences
Which country’s innovation ecosystem is more vibrant – Switzerland or India
Speakers: Dr. Sabine Kapasi, Dr. Aditya Yad
India is far more vibrant (Dr. Sabine Kapasi)
Switzerland’s biotech legacy is highlighted as a strength (Dr. Aditya Yad)
During a light-hearted exchange, Sabine claims India is “far more vibrant” and suggests a debate, while Aditya emphasizes Switzerland’s long-standing biotech research ecosystem, an unexpected point of contention unrelated to AI technicalities [63-65][70-72].
Overall Assessment

The panel shows broad consensus that AI can benefit healthcare, but key disagreements revolve around who should drive adoption, how quickly transformative impacts will materialise, and which use‑cases should be prioritised first. These divergences reflect differing perspectives on governance (private‑sector versus industry/government leadership), risk tolerance regarding timelines, and strategic focus (administrative efficiency versus diagnostic affordability).

Moderate – while participants share common goals of safer, effective AI in health, the differing views on leadership, pacing, and priority use‑cases could affect coordination and policy design, requiring clear frameworks to align private innovation with public sector capacity building.

Partial Agreements
Both agree that AI must provide information while clinicians make the ultimate decisions; Chris notes Claude will never be a doctor and will express uncertainty, and Sabine explicitly frames AI as preparation with clinicians for judgment [236-240][232-236].
Speakers: Chris Ciauri, Dr. Sabine Kapasi
AI should serve as a preparation tool, not a decision‑maker
Clinicians retain final judgment
Chris describes training Claude on 12 Indic languages to address multilingual barriers [141-143], and Sabine asks about the role of LLMs versus smaller models for India, indicating shared recognition of the need for language‑specific solutions [274-277].
Speakers: Chris Ciauri, Dr. Sabine Kapasi
Need for localized, multilingual models for India (Chris Ciauri)
Importance of small, language‑specific edge models (Dr. Sabine Kapasi)
Takeaways
Key takeaways
AI can dramatically reduce administrative burden for clinicians and improve patient‑care time, especially in the US and India.
India’s extensive digital health‑record infrastructure and rapid cloud adoption provide a strong foundation for deploying AI at scale in the Global South.
The Switzerland‑India free‑trade agreement and a $100 bn investment commitment create a strategic partnership for health‑tech and AI collaboration.
Safety is a non‑negotiable priority; Anthropic’s Claude model is designed to say “I don’t know” to avoid confident but incorrect answers.
Large‑language models (LLMs) can accelerate drug discovery, shorten clinical‑trial cycles, and enable AI‑driven biomanufacturing that improves yields and lowers costs.
Workforce enablement is essential: clinicians must be trained to treat AI as a preparatory tool, while CEOs and SMEs need guidance to embed AI from inception.
Multilingual AI (trained on 12 Indic languages) is critical for expanding access and diagnostic support across India’s diverse linguistic landscape.
Building public trust around the use of personal medical data is a prerequisite for broader AI adoption.
Future AI ecosystems will include both ever‑more capable, safety‑focused large models (e.g., Claude) and smaller, language‑specific models for edge use cases.
Resolutions and action items
Anthropic opened a Bengaluru office to co‑develop AI solutions with Indian partners.
Anthropic committed to training its models on multiple Indic languages to address multilingual barriers.
An Indian state government (described by Dr. Aditya Yad) launched a program to train SME CEOs on integrating AI from the start of product development.
Both Anthropic and the Swiss‑Indian partnership agreed to keep patient data out of model training and to maintain strict safety safeguards.
Commitment to continue developing safety‑first AI (Claude) that can defer judgment (“I don’t know”) in clinical contexts.
Unresolved issues
Specific pathways for convincing insurers and payers to reimburse AI‑enhanced diagnostics that may be costlier upfront but cheaper downstream.
Scalable strategies for educating and upskilling the 40,000+ SMEs in Indian states on AI adoption and governance.
Detailed regulatory frameworks and standards for AI use in clinical decision support, especially concerning liability and accountability.
Mechanisms for ensuring consistent, high‑quality AI deployment across diverse Indian languages and dialects beyond the initial 12 languages.
How to balance AI‑driven automation with the need for human clinical judgment without creating over‑reliance or under‑use.
Suggested compromises
Position AI as a preparation and workflow‑automation tool, while explicitly reserving final clinical judgment for human clinicians.
Adopt a clear policy that AI models will never be trained on patient data, addressing privacy and trust concerns.
Combine large, general‑purpose models (Claude) with smaller, language‑specific models for edge cases, allowing both broad capability and localized relevance.
Use government‑led training programs for CEOs and SMEs to create a uniform baseline of AI understanding, reducing fragmented or inconsistent implementations.
Thought Provoking Comments
AI can do a lot of good. It also can create a lot of harm if done carelessly. Anthropic was founded with a mission around safety, and we focus a lot on that. So we like the tension between capability of AI models but also making sure that the safety is right so that we can deliver on some of the opportunities.
Sets safety as the foundational lens for any healthcare AI work, reminding the group that technical capability must be balanced with ethical responsibility.
This comment reframed the conversation from pure opportunity‑seeking to risk‑aware innovation. It prompted the subsequent deep dive into how Anthropic builds ‘I don’t know’ responses, how they protect patient data, and led other speakers (e.g., Dr. Kapasi and Dr. Aditya) to raise questions about trust, regulation, and workforce training.
Speaker: Chris Ciauri
In the U.S. only 30% of a clinician’s time is spent on patient care; the rest is paperwork and administrative tasks. In India the biggest challenge is access – average primary‑care visits last two minutes. AI can decrease paperwork, reduce administrative burden and make health‑care more broadly accessible.
Draws a clear, data‑backed contrast between two major health‑system pain points (administrative overload vs. access) and positions AI as a lever for both, expanding the scope of discussion beyond a single geography.
Shifted the dialogue toward concrete use‑cases (administrative automation, multilingual support) and encouraged Dr. Kapasi to ask about specific AI applications. It also set up the later discussion on multilingual models and the need for AI that works in India’s diverse language environment.
Speaker: Chris Ciauri
Switzerland has been ranked number one in the Global Innovation Index for 15 years, driven largely by biotech and pharma, yet its domestic market is only 9 million people. The new India‑Switzerland free‑trade agreement includes a $100 billion investment commitment and 1 million jobs in India, making AI‑driven cost reduction in drug discovery a strategic priority for both countries.
Links macro‑economic policy, cross‑border investment, and AI‑enabled cost efficiencies, highlighting how national agreements can accelerate health‑tech adoption in LMICs.
Introduced the theme of international collaboration and financing, prompting Chris to discuss how AI can accelerate drug development and Dr. Kapasi to explore how such partnerships could scale diagnostic and therapeutic innovations across the Global South.
Speaker: Dr. Aditya Yad
We have trained Claude on 12 Indic languages – and we are continuing to add more dialects – because multilingual capability is a key barrier to AI‑driven health‑care access in India.
Identifies a concrete technical hurdle (language diversity) and demonstrates a tangible step Anthropic has taken, moving the conversation from abstract opportunity to actionable product development.
Prompted Dr. Kapasi to highlight India’s high smartphone penetration and multilingual challenges, and reinforced the narrative that AI can be localized to serve the Global South, shaping the later discussion on scaling and adoption.
Speaker: Chris Ciauri
Screening and diagnostics are shifting from treatment‑centric models to prevention, but the economics are tricky – people often won’t pay for a test when they don’t feel sick. We need to rethink how to finance and incentivize preventive AI‑enabled diagnostics.
Raises a systemic, market‑based obstacle that goes beyond technology, questioning how AI‑driven preventive tools can achieve sustainable adoption in low‑ and middle‑income contexts.
Steered the conversation toward health‑system financing, leading Dr. Aditya to discuss trust in data and insurance models, and Chris to emphasize the role of AI as a preparatory tool rather than a decision‑maker, deepening the analysis of implementation challenges.
Speaker: Dr. Sabine Kapasi
AI is for preparation. Clinicians are for judgment. Claude will say ‘I don’t know’ when uncertain and will never use patient data to train our models. Those are non‑negotiables for health‑care adoption.
Provides a clear operational principle that separates AI assistance from clinical authority and addresses privacy concerns, directly tackling the safety and trust issues raised earlier.
Reassured the panel about ethical safeguards, influencing Dr. Kapasi’s follow‑up on workforce training and Dr. Aditya’s emphasis on data‑trust. It also anchored the later discussion on education and the need for clear boundaries between AI outputs and clinician decisions.
Speaker: Chris Ciauri
We launched a state‑government program that brings CEOs of SMEs together to train them on embedding AI from day one. The challenge is convincing 40,000 companies to adopt AI, otherwise adoption will be fragmented.
Highlights a practical, ecosystem‑level strategy for AI diffusion, moving the conversation from high‑level policy to on‑the‑ground capacity building.
Introduced the theme of scaling AI adoption through leadership development, prompting Chris to acknowledge the need for both large‑scale models and smaller, edge‑focused solutions, and reinforcing the discussion about workforce enablement.
Speaker: Dr. Aditya Yad
In the next five years we’ll see a mix: large, safe models like Claude for broad, high‑impact use cases, and smaller, targeted language models for edge applications, especially in multilingual contexts.
Projects a nuanced future model ecosystem, balancing the power of frontier models with the practicality of lightweight, domain‑specific solutions.
Shifted the dialogue toward technical road‑mapping, leading Dr. Kapasi to ask how countries like India could play a role, and setting up expectations for diversified AI deployment strategies in LMICs.
Speaker: Chris Ciauri
Overall Assessment

The discussion was shaped by a series of pivotal remarks that moved the conversation from a high‑level optimism about AI’s potential to a nuanced, implementation‑focused dialogue. Chris’s safety‑first framing and his contrast of US administrative burdens versus Indian access challenges opened the floor to concrete use‑cases. Dr. Aditya’s articulation of the India‑Switzerland partnership and the state‑run AI leadership program injected policy and ecosystem‑scale perspectives. Dr. Kapasi’s focus on preventive diagnostics and financing highlighted systemic barriers beyond technology. Each of these comments triggered deeper exploration of risk management, multilingual model development, workforce training, and the future mix of large and small AI models. Collectively, they transformed the panel from a speculative overview into a strategic roadmap for AI adoption in healthcare across both high‑income and low‑ and middle‑income settings.

Follow-up Questions
How can the healthcare workforce be trained to adopt AI tools while ensuring they do not act directly on LLM-generated advice, and how can the broader ecosystem be educated to promote preventive screening before patients feel a need?
Ensuring safe AI integration requires clear guidelines and training for clinicians and patients; without this, misuse could cause harm, especially in preventive care.
Speaker: Dr. Sabine Kapasi
What strategies are needed to build public trust in the handling of personal medical data for AI applications, and how can transparency and consent mechanisms be improved?
Trust is essential for data sharing; without public confidence, AI systems cannot access the data needed for effective healthcare solutions.
Speaker: Dr. Aditya Yad
How can AI adoption be scaled among the large number of SMEs (e.g., 40,000 companies in a state) in India, and what incentives or programs can effectively overcome resistance to change?
SME uptake is critical for widespread AI impact; understanding barriers and designing effective outreach is necessary for national AI rollout.
Speaker: Dr. Aditya Yad
What governance frameworks and technical safeguards are required to ensure that patient data is never used to train LLMs, while still allowing model improvement?
Protecting patient privacy while maintaining model performance is a core safety challenge that needs concrete policies and audits.
Speaker: Chris Ciauri
How can multilingual AI models be further developed and validated for the 12+ Indic languages and their dialects to ensure reliable healthcare assistance across India’s linguistic diversity?
Language coverage directly affects accessibility; rigorous evaluation is needed to avoid errors in non‑English contexts.
Speaker: Chris Ciauri
What empirical evidence is needed to substantiate claims that AI can reduce drug discovery cycles from weeks to hours, and how can these reductions be measured and validated in real‑world settings?
Quantifying AI’s impact on drug development is crucial for investment decisions and regulatory acceptance.
Speaker: Chris Ciauri
How will AI integration into biomanufacturing (e.g., biofoundries) affect yield, cost, and scalability of biologics in India, and what research is required to assess ROI and regulatory implications?
AI‑enabled manufacturing could lower costs of expensive biologics; understanding economic and compliance aspects is essential for policy support.
Speaker: Dr. Aditya Yad
What frameworks and standards are needed to incorporate AI into clinical trial design, regulatory science, and drug approval processes over the next 5–10 years?
AI can streamline trials but must align with regulatory requirements; clear standards will facilitate safe adoption.
Speaker: Dr. Sabine Kapasi
How can sustainable business models be created for AI‑driven diagnostic tools in low‑income settings where patients often pay out‑of‑pocket, ensuring affordability and insurance coverage?
Diagnostics are a gateway to early treatment; without viable financing models, adoption will be limited.
Speaker: Dr. Sabine Kapasi
What metrics and methodologies should be used to measure the ROI of AI‑driven reductions in administrative burden for clinicians globally?
Quantifying time and cost savings is needed to justify AI investments and to guide policy and reimbursement decisions.
Speaker: Chris Ciauri

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.