Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Cristiano Amon

Session at a glance
Summary, keypoints, and speakers overview

Summary

The session opened with a brief overview of Cisco’s presentation on agentic and physical AI and a reminder that future technology will be built by humans using AI, not by AI itself [1-2]. The moderator then introduced Qualcomm CEO Cristiano Amon as a leader shaping wireless technology and edge AI [4-5].


Amon described the “next chapter of AI” as a transition from chat-based interfaces to autonomous agents that will be embedded everywhere, especially on edge devices [13-18][20-22]. He emphasized that Qualcomm’s chips enable AI to run locally on billions of devices, changing the human-computer interface by allowing machines to understand vision, speech and intent [21-24][25-27]. According to Amon, agents will replace traditional smartphones, operating systems and app stores by directly interpreting user intentions and acting across any connected hardware [26-31][32-35]. These agents can be accessed not only from phones but also from wearables such as smart glasses, pendants, or earbuds, creating a multi-device ecosystem [36-39][51-55].


He illustrated this with a scenario where smart glasses recognize a product, initiate a purchase on an e-commerce platform, and handle payment autonomously [51-58][59-60]. Amon argued that the cloud-vs-edge debate is misplaced because intelligence will be distributed across the cloud, near-edge, network and on-device, with each location handling tasks suited to latency and context [65-71][74-78]. He noted that future agents must be fast, context-aware, and seamlessly blend cloud and device processing without user friction [79-84][90-93].


Looking ahead, Amon highlighted that 6G will embed AI directly into the telecom network, turning it into a large-scale sensing platform that can map environments and support services such as autonomous driving and drone traffic management [127-134][136-144]. He stressed that this AI-enabled network will generate massive amounts of private data, far beyond publicly available internet data, providing richer context for personalized models [96-99]. Amon pointed to India’s high mobile data consumption and manufacturing ambitions as a prime opportunity to lead the AI-driven transformation across industries such as smart manufacturing, cities, health, education and agriculture [152-166][168-170]. He concluded that Qualcomm’s unique ability to produce chips ranging from sub-2 mW earbuds to 2 kW data-center processors positions the company to help realize this agent-centric future while enabling partners worldwide [103-106][110-112][175].


Keypoints


Major discussion points


AI agents will become the primary interface, replacing traditional operating systems and apps.


Amon describes a shift where “the smartphone … is going to get replaced by an agent” and that “the agent is going to be at the very center” of the mobile ecosystem, accessible from phones, glasses, wearables, and other devices [24-30][35-38][106-108].


Edge AI and distributed intelligence will blur the cloud-vs-edge debate.


He argues that “it does not matter” whether processing is on the cloud or the edge, emphasizing that “intelligence is going to be incredibly distributed across the cloud, across the near edge, the network … and on-device” and that tasks will be split transparently for speed and relevance [65-71][74-78][80-88][90-94].


The next generation of wireless (6G) will embed AI into the network itself, creating a large-scale sensing and services platform.


The talk moves from the history of telecom to “6G … will provide an evolution of connectivity” and highlights that “the biggest part of 6G is AI … the network … will sense everything around you” enabling new use-cases such as traffic management, drone detection, and autonomous-driving support [110-118][127-135][136-145].


AI-driven transformation presents massive opportunities for India across multiple sectors.


Amon links the AI wave to India’s “incredible opportunity,” citing potential impacts on smart manufacturing, smart cities, healthcare, education, agriculture, and overall economic growth [148-155][160-168].


Qualcomm’s unique semiconductor breadth positions it to enable this AI-centric future.


He notes Qualcomm’s capability to produce chips ranging “from sub-2 milliwatts … to 2,000 watts per chip on the data center,” underscoring the company’s role in powering agents and AI across every class of device [103-106][104-105].


Overall purpose / goal


The discussion aims to articulate Qualcomm’s vision for the “next chapter of AI,” emphasizing the rise of agentic AI, the necessity of edge-distributed processing, and the pivotal role of upcoming 6G networks. It seeks to position Qualcomm as the hardware and software enabler of this ecosystem while highlighting strategic opportunities for India’s economy and industry.


Overall tone


The tone is consistently enthusiastic, forward-looking, and confident. Amon’s language is optimistic (“incredibly excited,” “incredible opportunity”) and visionary, with brief moments of clarification (e.g., dismissing the cloud-vs-edge debate) that do not diminish the overall upbeat and persuasive mood. The tone remains steady throughout, reinforcing a sense of momentum and possibility.


Speakers

Cristiano Amon


– Role/Title: President and Chief Executive Officer, Qualcomm [S1][S2][S3]


– Areas of Expertise: Artificial Intelligence, semiconductor technology, wireless communications, edge computing, mobile computing [S2]


Speaker 1


– Role/Title: Event moderator/host (role not specified) [S4][S5][S6]


– Areas of Expertise: Not specified


Additional speakers:


(none)


Full session report
Comprehensive analysis and detailed insights

The session opened with a brief recap of Cisco’s presentation on agentic and physical AI, ending with a reassurance that the future will be built by humans who can confidently harness AI rather than by AI itself [1-2]. The moderator then highlighted AI’s presence beyond the cloud – in pockets, cars and factories [6] – before introducing Qualcomm’s President and CEO, Cristiano Amon, as a leading figure in wireless technology and edge-AI innovation [4-9].


Amon described the “next chapter of AI” as a shift from chat-box interactions to autonomous agents that will be embedded everywhere [13-18]. He stressed that Qualcomm’s silicon enables AI to run locally on billions of devices, fundamentally changing the human-computer interface by allowing machines to understand vision, speech and intent without users having to learn new interaction paradigms [21-27]. This perspective aligns with the earlier human-centric framing, underscoring that AI is a tool that augments human agency [2][30-31].


He argued that the smartphone will be superseded by an “agent” that replaces operating systems and app stores, becoming the primary platform for interaction [48-52]. Because the agent is not tied to a single device, it can be accessed from phones, smart glasses, pendants or other wearables, creating a multi-device ecosystem where the agent sits at the core [36-39][106-108].


Amon also highlighted that AI will be trained on physical signals – “physical AI” – using sensor data from cameras, radars and other on-device sensors, allowing agents to operate on information gathered from the physical world across every computer [84-86].


To illustrate this vision, he presented a scenario in which smart glasses equipped with an agent recognize a product, query an e-commerce platform, and complete a purchase, including payment and receipt generation, without the user touching a screen [51-60]. He linked this consumer-level transformation to a broader industrial revolution affecting robotics and manufacturing, suggesting that similar agent-driven workflows will reshape those sectors [61-63].
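The flow described above (recognize, query, authorize, pay) can be illustrated with a small, purely hypothetical sketch. Every function name below is an invented stand-in; none corresponds to a real Qualcomm, Flipkart, or e-commerce API.

```python
# Hypothetical sketch of the smart-glasses purchase scenario.
# All functions are illustrative stand-ins, not real APIs.

def recognize_product(camera_frame: str) -> str:
    # Stand-in for an on-device vision model identifying the object in view
    return camera_frame.removeprefix("image_of_")

def query_store(product: str) -> dict:
    # Stand-in for an e-commerce marketplace lookup
    return {"product": product, "price_inr": 1999, "in_stock": True}

def purchase(listing: dict, authorized: bool) -> str:
    # The agent acts only with explicit user authorization,
    # matching Amon's "assuming you will authorize it"
    if not authorized:
        return "awaiting user approval"
    if not listing["in_stock"]:
        return "out of stock"
    return f"receipt: {listing['product']} for {listing['price_inr']} INR"

def agent_buy(camera_frame: str, authorized: bool = False) -> str:
    # End-to-end pipeline: see -> look up -> (authorized) buy
    return purchase(query_store(recognize_product(camera_frame)), authorized)

if __name__ == "__main__":
    print(agent_buy("image_of_headphones", authorized=True))
```

The key design point the keynote implies is the authorization gate: the agent is free to perceive and query, but the irreversible step (payment) stays behind explicit user consent.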


Addressing the often-cited cloud-vs-edge debate, Amon argued that the distinction is misplaced; intelligence will be distributed seamlessly across cloud, near-edge, network and on-device resources [65-78][80-94]. Tasks requiring instant response or highly personal context will run on the device, while others will be processed in the cloud, with the split being transparent to the user [80-94]. He emphasized that agents must be fast, relevant and friction-free, delivering real-time responses such as “who is this person?” or “translate this” [79-84][90-93].
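The split described above can be sketched as a simple routing policy: latency-critical or personal-context tasks run on-device, everything else goes to the cloud. The thresholds, latency figures, and task names below are illustrative assumptions, not anything Qualcomm has specified.

```python
# Illustrative sketch (not Qualcomm's actual stack) of transparent
# task routing between on-device and cloud execution.
from dataclasses import dataclass

@dataclass
class AgentTask:
    name: str
    max_latency_ms: int          # how quickly the user needs an answer
    uses_personal_context: bool  # e.g. "who is this person?" needs local data

# Assumed round-trip times; real numbers vary by model, device, and network
CLOUD_LATENCY_MS = 400

def route(task: AgentTask) -> str:
    """Return 'device' or 'cloud' for where the task should run."""
    # Personal context stays on-device for privacy; tight latency
    # budgets also force local execution. The rest can use the cloud.
    if task.uses_personal_context or task.max_latency_ms < CLOUD_LATENCY_MS:
        return "device"
    return "cloud"

if __name__ == "__main__":
    for t in [
        AgentTask("translate sign in view", 100, False),
        AgentTask("identify person nearby", 1000, True),
        AgentTask("summarize a long report", 5000, False),
    ]:
        print(f"{t.name} -> {route(t)}")
```

The point of the sketch is that the decision is made by the system, not the user: the same agent call transparently lands wherever latency and context requirements are best served, which is the "friction-free" property Amon emphasizes.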


Looking further ahead, Amon positioned 6G as more than a speed upgrade. The upcoming generation will embed AI directly into the telecom fabric, turning the network into a large-scale sensing platform that provides environment mapping, traffic-management systems, support for fully autonomous vehicles, drone detection, and an aerial wide-area network [122-130][110-118][127-135][136-145]. This AI-infused network will generate massive private data streams, far exceeding publicly available internet data, providing rich contextual information for personalized models [96-99][100-102].


Amon then turned to India’s strategic advantage. He noted the country’s exceptionally high mobile data consumption and its rapid leapfrogging of fixed-line internet, which together create a fertile ground for AI-driven transformation [152-154]. Coupled with India’s emerging role as a global manufacturing hub, these factors open opportunities across smart manufacturing, smart cities, healthcare, education and agriculture, aligning with the AI Summit’s ambitions [148-166][168-170].


Finally, Amon underscored Qualcomm’s unique position to enable this agent-centric future. The company’s semiconductor portfolio spans from ultra-low-power earbud chips to multi-kilowatt data-center processors, allowing it to power AI workloads across the entire device spectrum [103-106][104-105]. Rather than seeking to own all innovation, Qualcomm adopts a partner-centric model, providing the hardware and software foundations that enable ecosystems to build and deploy agents [172-174]. He concluded with optimism about the transformative potential of AI and thanked the audience [175].


Session transcript
Complete transcript of the session
Speaker 1

That was really an interesting session by the CEO of Cisco, highlighting agentic AI, the role of agentic AI, as well as physical AI and the current scenario. And also the last line was really an assuring line, saying that the future will not be built by AI, but by humans who can confidently put AI to use. Well, ladies and gentlemen, moving on. Now it’s my honor to introduce a leader who’s been at the forefront of shaping the future of wireless technology and intelligent computing. Mr. Cristiano Amon is the president and chief executive officer of Qualcomm, a company that has defined and continues to redefine the global compute, connectivity and AI landscape. And well, AI doesn’t just live in the cloud; it runs in your pocket, in your car, on the factory floors.

And Mr. Amon is leading Qualcomm’s push to bring powerful AI processing to the edge, enabling billions of devices to think locally and act intelligently. Ladies and gentlemen, it’s my pleasure to invite Mr. Amon, President and CEO of Qualcomm, to the stage. Please give a round of applause.

Cristiano Amon

Good afternoon, everyone. Very, very happy and privileged to be here. I’m incredibly excited and energized about what’s happening here in India with AI and I think what’s happening with AI in general. What I’d like to talk to you about today is the next chapter of AI. And this is something that’s very near and dear to Qualcomm. We’ve been talking about this because I think we’re really entering now the next phase of AI. As AI gets developed, it’s going to be part of everything that we do. And especially… the interaction that we have with computers and with digital… So intelligence is now shifting from something that we kind of started and we all experienced, going to, you know, a chat box and asking questions, into something that is going to be all around us and everywhere all the time, especially with the devices.

I actually love the presentation right before from my friend Jitu from Cisco when he talked about the traffic change from chat box to agents. And this is important. You know, I’ve been often talking about this, how we should be thinking about AI in a much broader sense. And it’s easier for a company like Qualcomm to talk about this because we build a lot of the chips that go into devices where the humans are. So as you create AI in the data center and you train and create those models, all this data. And you deploy this, you’re starting to see that this gets utilized in different ways. One fundamental thing that AI is doing for us.

It is changing the human-computer interface, because we don’t have to now learn how to use a computer. You know, I’ve been often talking about this in different presentations: we learned how to use a QWERTY keyboard, and we still use that on a laptop. Then we learned to touch a screen. But now the AI understands what we see, what we hear, what we say, what we write. So in itself it’s changing computers, it’s changing the devices we interact with, and it’s becoming a pervasive technology that is going to be everywhere. And I think that’s the mission of Qualcomm. When I think about what we’re going to do, it’s the same way as what we did with mobile communications and the creation of the computer that fits in the palm of your hand: it’s the ability to take that intelligence everywhere. So we’re going to be creating a number of important shifts in the industry, and I want to start talking about the mobile industry. We have had the privilege as a company to be part of every single transition of wireless technologies, and today I’m going to talk about the next one that is coming as well. What we saw with the transition of wireless technology is that fundamentally, at every generation of wireless, you saw big shifts, not only in devices and companies, because of the transition. Especially, for example, when you went from the ability to have a phone that you carry with you all the way to connecting the phone to the internet, all of a sudden that phone became a computer, and it started to drive the future of the internet, like a country like India that leapfrogged, I think, the internet and went straight to the mobile internet. And that’s going to be true again when you think about AI. For example, in the mobile ecosystem, AI is going to fundamentally change how we think about the mobile device. All of you today, and me included, I think we look at our smartphone as our inseparable device, where most of our digital life is.

And the smartphone today is at the center of everything that we do. But now that’s going to get replaced by an agent. Now, when you think about the entire value chain that got created, for example, for the mobile industry, there’s an enormous amount of value on things like OSs and application stores. And that becomes like the platform: when you’re going to develop an application, you’re going to do different things into the platform. An agent that now understands human intentions because, you know, you just need to tell him what you want. Or he’s going to see what you see and make a decision for you, assuming you will authorize it. When that happens, that’s where the value is, because then the agent is free.

It can go to the internet and do things. It can go to your phone and do things. And you’re no longer bound by constructs of your hardware or your apps in the application. So as a result, we expect the AI is going to have a fundamental shift in the mobile industry where the agent is going to be at the very center. And as the agent is at the very center, everything surrounds the agent. You can access the agent from your mobile phone, but you can also access the agent from your glasses or for a pendant or for anything that you wear. And I think we’re going to look at the mobile ecosystem right now, not only as a single device experience, but you’re going to connect to agents across multiple types of devices.

And I think that’s incredibly exciting. And that’s not only unique to what you’re going to see in consumers. That’s going to happen also with things, because you can also create AI that’s going to get trained on different things: on physical signals, like physical AI, on sensor data, and you’re going to deploy that in every computer. So what’s exciting about AI is that it’s going to very quickly evolve from something where you go to a browser and you ask a question. And I think, as my colleague from Cisco said, it got trained on all the publicly available data on the Internet. You’re now going to go to a different type of AI experience that’s going to be the fundamental software that is going to run in all the devices around us, and how you’re going to have interaction with the devices.

So I also want to basically, you know, as we think about this future, I just want to give you an example. What we saw across the industry is workloads or use cases have shifted. Devices didn’t go anywhere, but their workloads shifted. We used to do a lot of things in the early days of the Internet on your laptop. For example, e-commerce: you would do it on your laptop. Now, most of the e-commerce in the world is done on a phone. Tomorrow, or it could be like as early as, you know, within the end of this year, as you start to see the proliferation of glasses. If you have glasses that have agents, are connected to the Internet, have a camera on those smart glasses, the glasses see what you see.

You can just look at something and say, I’d like to buy this. What is, you know, can you check this? For example, check this on Flipkart. Just buy it for me. I’d like to buy this. Integration of payment systems. You got a bill, say, pay this, notify me when I’m done, and so forth. So I think we’re going to see this fundamental change of devices. But that’s also going to be true about the revolution that’s happening in robotics and the revolution that’s exactly happening on industrials. So that’s an incredible opportunity. And we have been incredibly focused as a company to basically drive that future of computing. There’s also a big debate, which I believe is the wrong way to look into that, which is about cloud and edge.

There’s a lot of debate about, oh, this is going to be running on the cloud, this is going to be running on the edge. And actually, it does not matter. Think about your device today. Your smartphone today has an incredible amount of processing power, and there’s a number of different things that run in your smartphone. If you put it on airplane mode, you probably don’t use it. You just put it back and wait until you get connectivity again. It’s the most cloud-connected device, because those things work as one. And you’re going to have now intelligence that’s going to be incredibly distributed across the cloud, across the near edge, the network itself, and on-device.

And it’s all going to work seamlessly. There are going to be things that you’re going to be able to do on the device because they require an instant response or require unique context, unique information that is relevant to you. Some things you are going to do on the cloud, and they’re both going to be growing, and it’s going to be transforming how we think about computers. So I’d like to provide a simple, I think, description. Let’s say we are all using agents, and you’re going to pick the agents that you like. And for the agents to be useful, they need to be fast. They need to be relevant for you. Let’s say, go back to the example I provided on the glasses.

And you have those smart glasses and you’re walking around and you have a camera. Then all of a sudden you see somebody and you ask these glasses, like it’s your friend next to you, and say, who is this person? And you want to get a response: this is so and so. Or you’re going to say, can you translate this for me? What is this? Can you pay this for me? This thing has to be seamless. Seamless means no friction. So certain things are going to be done on your device and other things are going to be done on the cloud. It’s going to be completely transparent to you. But the interesting thing is those agents, for them to be very useful, they need to be contextually aware of what is relevant to you.

So over time, the agent I’m going to be using, the agent you’re going to be using, they need to be relevant to me. So you’re going to have a lot of things that are going to be processed and understood about you. So much so that I believe that, in the end game, I think it was said in the prior presentation from Cisco, all this available data that is publicly on the Internet that you train models on, it’s a fraction of the data that is going to be generated. If you have, for example, glasses with a camera that sees everything that you see, annotates the image, gets information about the image and the context, reads what you read.

And so forth, that is an incredible amount of data, and that’s going to be providing a lot of important context for those models that are going to be relevant to you. That is the future, and it’s an incredible transformation. It’s going to transform every industry. No industry is immune to this. And I think what we’re doing at Qualcomm is really creating the future hardware and software that will help enable this future across all the devices. We’re a very unique semiconductor company. I think we’re probably one of the few companies that can be working on chips from sub-2 milliwatts for a smart earbud that you’re going to wear, all the way to now 2,000 watts per chip in the data center.

But I think that’s the incredible future that AI is going to transform every single computer. And the agents are going to be at the center of the experience. It’s going to replace a lot of the OSs and applications. And that is the new future of technology, including the future of mobility. And that’s why we’re incredibly excited about this. And with that, I want to talk about something that is happening, which is about the next generation of wireless technologies. I would like to provide an example from the past. When you think about telecom networks, and I think we’re probably one of the, you know, American telecom companies that really focus on the evolution of cellular technology.

When you think about the evolution of this sector, when this all started, it was about providing a telephone, which I think for all of us was an incredible thing. You have a twisted copper pair to get to your home. You pick up. You get a dial tone. You dial. And eventually, you could dial anybody in the world with a telephone. Even how cellular started was about making sure all of us had the ability to carry a telephone. That was 2G, that you can call everyone. That’s different today. Now you have a very high performance broadband network for data. Voice is just one application in the many applications that you do with the network. It fundamentally changed the nature of the infrastructure.

The equipment was different. The use case is different. We’re heading to the next big transformation of the telecom sector. So 6G is going to provide an evolution of connectivity: faster speed, lower latency, higher coverage. But that’s not the story. That’s just a piece of the story; it just continues to improve the connectivity. The biggest part of 6G is AI, like I said before, which is now going to come to the telecom network. And that becomes a large-scale 6G AI network that is processing and getting trained on all of the signals that happen at the network and providing new capabilities. One of the biggest features of 6G is the sensing network at scale.

I’m going to give an example. The network not only will provide connectivity between your device and the Internet, but will sense everything that’s around you. It will use techniques that you see today in autonomous driving cars, like radars, as an example, to detect your environment. It’s going to provide a map of everything that is happening, at scale. And you’re going to have completely different types of services for different industries. It will provide context for your agents. Very important. And the network will have that role. It will provide traffic management systems and some of the use cases that are going to be part of full self-driving cars. It will do drone detection and manage the traffic control.

Of the drone economy, there’s going to be an aerial wide-area network and much more, because AI is also going to the network. It’s going to be one of the biggest transitions I think we have, as big as going from voice to data, and it’s all going to be part of this future of AI. And I just want to now make another parallel, I think, to the presentation from my colleague from Cisco. It puts a fine point on the network that needs to be built, the capability of the infrastructure, the security and trust. But that is an incredible future with technology. And as I get to the end of the presentation, I want to highlight that India has an incredible opportunity with this transformation.

We have seen that those big shifts in technology create opportunity and change players. They changed, I think, the role of different countries in what they provide globally. It’s a global scale for the technology, and that’s an incredible opportunity for India. I look at what happened in mobile in India, and one of the largest data consumptions per user on mobile devices in the world is in India. The whole Internet is mobile. When you think about the potential and all of the things that I just discussed about how AI is going to change everything, creating new devices, new experiences, new services, that becomes a massive opportunity. And when I look at the ambitions that were set by the AI Summit, I’m going to provide just some examples.

Those are just examples; it can be much broader, but I just want to connect with some of the ambitions of the Summit. There is a process of jumping into large-scale industrialization. India is becoming a global manufacturing hub as well. And with AI, you go from the very beginning, with smart manufacturing and automation, with the incredible change that is happening in this sector enabled by those technologies. Same thing with smart cities, the ability to continue to evolve the infrastructure, the ability to use AI to increase the scale, the reach, the access for healthcare. How you change education. Those are incredibly powerful learning tools, the ability to actually use some of those technologies to empower people with information, and you’re going to have an ongoing learning experience.

Think about those agents with you all the time, answering questions, telling you how to do things, especially when you think of the context, for example, of those new devices such as smart glasses. And it can fundamentally change industries, for example, such as agriculture. Those are just a few examples of the potential of connecting this technology with everything, I think, that is going on in India. It’s an incredible and exciting future enabled by AI. And really, it’s about meeting the ambition of democratizing this technology for everyone and actually having an important role in increasing global welfare. And, you know, as a company that has always been focused on enabling our partners and other industries to innovate, I think in the history of Qualcomm, we never believed it is the job of one company to be responsible for all the innovation.

It’s really to enable many industries and partners. We’re incredibly excited to play a very small part in this mission. Thank you very much for the opportunity to talk with all of you.

Related Resources
Knowledge base sources related to the discussion topics (15)
Factual Notes
Claims verified against the Diplo knowledge base (7)
Confirmed (high confidence)

“Cristiano Amon is President and CEO of Qualcomm.”

The knowledge base lists Amon as President and CEO of Qualcomm in multiple entries, confirming his role [S1] and [S3].

Confirmed (medium confidence)

“The future will be built by humans who can confidently harness AI rather than by AI itself.”

S29 emphasizes enhancing rather than replacing humanity with AI, supporting the view that humans remain the primary builders of the future.

Confirmed (high confidence)

“AI’s next chapter will shift from chat‑box interactions to autonomous agents embedded everywhere.”

S15 describes a future of agent‑first interfaces replacing traditional app‑based interactions, aligning with the claim about autonomous agents becoming ubiquitous.

Additional Context (high confidence)

“Qualcomm’s silicon enables AI to run locally on billions of devices, removing the need for constant cloud access.”

S53 notes that today’s premium smartphones, AR glasses, and PCs can run large models locally, eliminating the need for continuous cloud connectivity, providing context for Qualcomm’s claim.

Confirmed (high confidence)

“The smartphone will be superseded by an ‘agent’ that replaces operating systems and app stores as the primary interaction platform.”

S15 predicts a shift toward agent‑first interfaces that could replace traditional OS/app‑store models, confirming the reported vision.

Additional Context (medium confidence)

“Physical AI will be trained on sensor data (cameras, radars, etc.) so agents can act on information from the physical world across every computer.”

S56 discusses physical AI in robotics, using sensor data to drive optimization and decision‑making, adding nuance to the claim about sensor‑driven agents.

Confirmed (high confidence)

“6G will embed AI directly into the telecom fabric, turning the network into a large‑scale sensing platform for mapping, traffic management, autonomous vehicles, and drone detection.”

S12 outlines that 6G will integrate AI across radio, core, and sensor ecosystems, enabling environment mapping and support for autonomous systems, confirming the described capabilities.

External Sources (57)
S1
Lift-off for Tech Interdependence? / DAVOS 2025 — – Cristiano Amon: President and CEO at Qualcomm Cristiano Amon: What I’ll say is, technology is moving very, very fast…
S2
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Cristiano Amon — This discussion features Cristiano Amon, President and CEO of Qualcomm, presenting his vision for the next chapter of ar…
S4
Keynote-Martin Schroeter — -Speaker 1: Role/Title: Not specified, Area of expertise: Not specified (appears to be an event moderator or host introd…
S5
Responsible AI for Children Safe Playful and Empowering Learning — -Speaker 1: Role/title not specified – appears to be a student or child participant in educational videos/demonstrations…
S6
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Vijay Shekar Sharma Paytm — -Speaker 1: Role/Title: Not mentioned, Area of expertise: Not mentioned (appears to be an event host or moderator introd…
S7
WS #110 AI Innovation Responsible Development Ethical Imperatives — Dr Zhang Xiao: Thank you everyone. I’m glad to be involved in this interesting discussion and I have three points to sha…
S8
Digital Humanism: People first! — Alfredo M. Ronchi: Thank you very much. Thank you very much, NK, for your contribution. So at the end, we’ll try to summ…
S9
Steering the future of AI — ## Future Predictions and International Cooperation ## Open Source Development Advocacy ### Challenges and Responses …
S10
From Human Potential to Global Impact_ Qualcomm’s AI for All Workshop — All right, I’m just going to click through this. This is good. This is probably a good indication of why the edge matter…
S11
AI for Good Technology That Empowers People — But with AGI, we don’t have to worry about that. Apart from that, I do want to touch on one thing. That is Qualcomm, one…
S12
Designing Indias Digital Future AI at the Core 6G at the Edge — Okay. Can we start? Great. So, good morning, our distinguished guests. My colleagues from the Government, Industry and A…
S13
HETEROGENEOUS COMPUTE FOR DEMOCRATIZING ACCESS TO AI — And India is definitely leading the way in terms of application layer. There’s no doubt about that. Now, of course, with…
S14
The Global Power Shift India’s Rise in AI & Semiconductors — The panelists emphasized that true AI leadership requires alignment across four key pillars: silicon, software, systems,…
S15
From Innovation to Impact_ Bringing AI to the Public — The future will be agent-first interfaces rather than traditional app-based interactions
S16
Future Network System as Open Platform in Beyond 5G/6G Era | IGF 2023 Day 0 Event #201 — Abhimanyu Gosain:I get the easy question here. So it’s artificial intelligence and machine learning, right? So that’s so…
S17
Connecting the Unconnected in the field of Education Excellence, Cyber Security & Rural Solutions and Women Empowerment in ICT — Future Technology and 6G Development Infrastructure | Cybersecurity ORAN Alliance Next Generation Research Group worki…
S18
Skilling and Education in AI — The conversation began with a Professor’s detailed analysis of four critical sectors where AI can drive substantial impa…
S19
AI-Powered Chips and Skills Shaping Indias Next-Gen Workforce — Success in building these capabilities will position India as an attractive destination for high-technology manufacturin…
S20
Closing plenary: multistakeholderism for the governance of the digital world — The burgeoning domain of digital transformation heralds a revolution in innovation and is epitomised by the rapid spread…
S21
Secure Finance Risk-Based AI Policy for the Banking Sector — It calls for institutional mechanisms that allow individuals to seek clarification and redress where automated decisions…
S22
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Cristiano Amon — “An agent that now understands human intentions because, you know, you just need to tell him what you want.”[32]. “You c…
S23
DiploNews – Issue 329 – 1 August 2017 — ​The field of artificial intelligence (AI) has seen significant advances over the past few years, in areas such as smart…
S24
JANUARY 14 TH , 2019 — Cybersecurity is increasingly important in a society with growing prevalence of information systems, many of which have …
S25
Lightning Talk #209 Safeguarding Diverse Independent NeWS Media in Policy — ## Background and Research Context None identified beyond those in the speakers names list.
S26
Laying the foundations for AI governance — Lan Xue: Okay. I think my job is easier. I can say I agree with all of them. So I think that’s probably the easiest way….
S27
morning session — In addition to the discussions surrounding confidence-building measures and the BWC, this expanded summary also emphasiz…
S28
Table of contents — + Even though Estonia is esteemed as a digital country in the world, our attention and resources are largely directed to…
S29
Enhancing rather than replacing humanity with AI — Right now, amid valid concerns about displacement, manipulation, and loss of human agency, there are also real examples …
S30
Comprehensive Report: Preventing Jobless Growth in the Age of AI — Companies should focus on augmenting human capabilities rather than replacing workers entirely
S31
How AI Is Transforming Diplomacy and Conflict Management — Maintaining human agency is crucial – people should be ‘above the algorithm’ rather than ‘below’ it
S32
WSIS Action Line C10: Ethics in AI: Shaping a Human-Centred Future in the Digital Age — Low level of fundamental disagreement with moderate differences in implementation strategies. The speakers largely agree…
S33
Welcome Address — “How to make AI machine-centric and human-centric?”[33]. “Friends, the future of work will be inclusive, trusted, and …
S34
Agentic AI in Focus Opportunities Risks and Governance — Data governance and quality control are foundational since agents make decisions based on data without human empathy or …
S35
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Cristiano Amon — “An agent that now understands human intentions because, you know, you just need to tell him what you want.”[32]. “You c…
S36
From Innovation to Impact_ Bringing AI to the Public — The future will be agent-first interfaces rather than traditional app-based interactions
S37
From Human Potential to Global Impact_ Qualcomm’s AI for All Workshop — Real-world implementations are already emerging. ByteDance has introduced an AI-first smartphone in China that eliminate…
S38
How AI agents are quietly rebuilding the foundations of the global economy  — AI agents have rapidly moved from niche research concepts to one of the most discussed technology topics of 2025. Search…
S39
Designing Indias Digital Future AI at the Core 6G at the Edge — The convergence of AI and 6G will create a distributed computing fabric that extends far beyond traditional network boun…
S40
AI for Good Technology That Empowers People — “So, you know, AI being available at the edge, not from, you know, the very basic thing that we all use every day is you…
S41
Trusted Connections_ Ethical AI in Telecom & 6G Networks — And let’s do it. India can show the direction forward. For the whole world. There is a tradition for great collaboration, g…
S42
Connecting the Unconnected in the field of Education Excellence, Cyber Security & Rural Solutions and Women Empowerment in ICT — Future Technology and 6G Development Infrastructure | Cybersecurity ORAN Alliance Next Generation Research Group worki…
S43
AI-Powered Chips and Skills Shaping Indias Next-Gen Workforce — Success in building these capabilities will position India as an attractive destination for high-technology manufacturin…
S44
Closing plenary: multistakeholderism for the governance of the digital world — The burgeoning domain of digital transformation heralds a revolution in innovation and is epitomised by the rapid spread…
S45
From India to the Global South_ Advancing Social Impact with AI — AI is the new electricity. The question is who has the switch? And today that’s what we will be discussing. You know, if…
S46
A Digital Future for All (afternoon sessions) — AI is enabling economic progress and entrepreneurship, especially in emerging markets. It can boost productivity across …
S47
https://dig.watch/event/india-ai-impact-summit-2026/building-trusted-ai-at-scale-cities-startups-digital-sovereignty-keynote-cristiano-amon — And so forth, that is an incredible amount of data, and that’s going to be providing a lot of important context for thos…
S48
Qualcomm is building generative AI into its next generation of chips — Next year,Qualcomm is set to bring generative AIto premium phones, thanks to their high-performance chips. This means th…
S49
Qualcomm brings new AI power to mobile chips — Qualcommis integratingadvanced AI technology from its laptop processors into mobile phone chips. The new Snapdragon 8 El…
S50
AI Development Beyond Scaling: Panel Discussion Report — Humans can adapt to new technologies over time, becoming stronger through coexistence rather than avoidance
S51
Tech Transformed Cybersecurity: AI’s Role in Securing the Future — Helmut Reisinger:Yeah. Good afternoon, everybody. As-salamu alaykum. I am representing Palo Alto Networks. We are a cybe…
S52
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Jeetu Patel President and Chief Product Officer Cisco Inc — And we are so grateful to be partnering with India in this journey ahead. So thank you all. Take care. But we are runni…
S53
Comprehensive Discussion Report: AI’s Existential Challenge to Human Identity and Society — The discussion’s foundation rested on Harari’s crucial distinction between AI as an agent rather than a mere tool. He ar…
S54
Day 0 Event #173 Building Ethical AI: Policy Tool for Human Centric and Responsible AI Governance — Chris Martin: Thanks, Ahmed. Well, everyone, I’ll walk through I think a little bit of this presentation here on what…
S55
National Strategy for Artificial Intelligence — –  Computer vision/identification of objects in images: can be used for purposes such as facial recognition or for iden…
S56
Comprehensive Summary: The Future of Robotics and Physical AI — And then using that information data to help to drive optimizations, longevity of the asset, predicting failures, those …
S57
AI, smart cities, and the surveillance trade-off — The danger isn’t the technology itself, but the assumption that AI-driven solutions are politically neutral, that algori…
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
S
Speaker 1
1 argument, 154 words per minute, 185 words, 71 seconds
Argument 1
Future built by humans using AI – Human‑centric AI (Speaker 1)
EXPLANATION
The speaker stresses that AI should serve as a tool for people rather than replace human agency. He underscores that the future will be shaped by humans who can confidently apply AI technologies.
EVIDENCE
He refers to the Cisco session, noting that the concluding remark assured that “the future will not be built by AI, but by humans who can confidently put AI to use” and that the session highlighted the role of agentic AI [2][1].
MAJOR DISCUSSION POINT
Human‑centric AI
AGREED WITH
Cristiano Amon
DISAGREED WITH
Cristiano Amon
C
Cristiano Amon
12 arguments, 163 words per minute, 3022 words, 1111 seconds
Argument 1
AI shifting from chatbots to pervasive agents that understand intent across devices – Agentic AI shift (Cristiano Amon)
EXPLANATION
Amon describes a transition from simple chat‑box interactions to intelligent agents that can interpret user intent across many device types. He frames this as a fundamental change in how AI will be experienced daily.
EVIDENCE
He praises the Cisco presentation for highlighting the “traffic change from chat box to agents” and explains that AI is moving from a single chat interface to being “all around us and everywhere all the time, especially with the devices” [18-19][17-18].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The keynote highlights the move from chat-box interactions to intent-understanding agents across devices, corroborated by the session transcript quoting agents that understand human intentions [S2].
MAJOR DISCUSSION POINT
Agentic AI shift
Argument 2
Qualcomm’s chips enable AI processing on the edge, allowing billions of devices to think locally – Edge AI enablement (Cristiano Amon)
EXPLANATION
Amon points out that Qualcomm designs the silicon that powers AI at the edge, making it possible for countless devices to run AI locally without relying on distant data‑centers. This edge capability is presented as a key differentiator for Qualcomm.
EVIDENCE
He notes that Qualcomm builds “a lot of the chips that go into devices where the humans are” and later emphasizes the company’s ability to produce chips ranging from “sub-2 milliwatts to … 2,000 watts per chip on the data centre” [21][104-105].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Qualcomm’s role in delivering edge AI chips is supported by a workshop noting the importance of on-device processing and Qualcomm’s edge-focused silicon portfolio [S10].
MAJOR DISCUSSION POINT
Edge AI enablement
Argument 3
Agents will become the central platform, superseding OSs and app stores – Central agent platform (Cristiano Amon)
EXPLANATION
Amon argues that future software ecosystems will revolve around intelligent agents rather than traditional operating systems or app marketplaces. These agents will act as the primary interface for user intent.
EVIDENCE
He explains that “the AI is going to have a fundamental shift in the mobile industry where the agent is going to be at the very center” and that agents will be “free” to act across the internet, phones and other services, removing dependence on hardware or apps [30-36].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The argument that agents will become the primary software platform aligns with remarks that agents will sit at the centre of the mobile ecosystem, superseding traditional OS and app stores [S2].
MAJOR DISCUSSION POINT
Central agent platform
Argument 4
Value will move to context‑aware agents accessible via phones, glasses, wearables, etc. – Cross‑device agents (Cristiano Amon)
EXPLANATION
Amon envisions agents that understand context and can be reached from any personal device, from smartphones to smart glasses or wearables. This cross‑device accessibility is presented as the next source of value creation.
EVIDENCE
He states that “you can access the agent from your mobile phone, but you can also access the agent from your glasses or for a pendant or for anything that you wear” and illustrates the concept with a scenario where smart glasses recognize a product and complete a purchase on Flipkart [37-38][51-58].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Cross-device accessibility of agents via phones, glasses, and wearables is directly referenced in the keynote transcript [S2].
MAJOR DISCUSSION POINT
Cross‑device agents
Argument 5
The cloud/edge debate is misplaced; intelligence will be seamlessly distributed across cloud, near‑edge, and on‑device – Distributed AI view (Cristiano Amon)
EXPLANATION
Amon dismisses the binary cloud‑vs‑edge argument, asserting that AI workloads will be spread organically across the entire continuum—from data‑centers to the device itself. He frames this distribution as inevitable and beneficial.
EVIDENCE
He calls the debate “the wrong way to look into that” and says “it does not matter” whether AI runs in the cloud or at the edge, then describes how intelligence will be “incredibly distributed across the cloud, across the near edge, the network in itself, in and on device” [65-68][70-77].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The claim that the cloud-vs-edge debate is misguided and that intelligence will be distributed across cloud, edge, and device is echoed in the session summary [S2].
MAJOR DISCUSSION POINT
Distributed AI view
Argument 6
Certain tasks require instant on‑device response while others run in the cloud; both will grow together – Task allocation model (Cristiano Amon)
EXPLANATION
Amon explains that latency‑sensitive functions will stay on the device, while compute‑intensive or context‑rich tasks will be handled in the cloud. Both domains will expand as AI adoption rises.
EVIDENCE
He notes that “things that you’re going to be able to do on the device because they require an instant response” coexist with “something is going to do on the cloud” and adds that agents must be “fast” and “relevant” to the user, making the split transparent [76-82][90-92].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The distinction between latency-sensitive on-device tasks and compute-intensive cloud tasks is discussed in the same keynote, confirming the dual growth model [S2] and reinforced by edge-AI workshop insights [S10].
MAJOR DISCUSSION POINT
Task allocation model
Argument 7
6G will embed AI into the network, providing large‑scale sensing and contextual data beyond higher speed/latency – AI‑infused 6G (Cristiano Amon)
EXPLANATION
Amon claims that the next generation of wireless (6G) will not only improve speed and latency but will also integrate AI directly into the network fabric, turning the network itself into an intelligent sensor platform.
EVIDENCE
He describes 6G as “the biggest part of 6G is AI, like I said before, is now going to come to the telecom network” and that it will become “an AI-powered network that is processing and get trained on all of the signals” [127-133].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Statements that 6G will embed AI into the network for large-scale sensing are present in the keynote and in a separate discussion on 6G at the edge [S2] [S12].
MAJOR DISCUSSION POINT
AI‑infused 6G
Argument 8
The AI‑powered network will support new services such as autonomous driving, drone detection, and aerial wide‑area connectivity – New network services (Cristiano Amon)
EXPLANATION
Amon outlines concrete use‑cases enabled by an AI‑infused 6G network, including traffic management for self‑driving cars, drone detection, and wide‑area aerial connectivity. These services illustrate the broader societal impact of the technology.
EVIDENCE
He explains that the network will “provide a map of everything that is happening at scale” and will enable “traffic management systems… full self-driving cars… drone detection… aerial wide-area network” [138-144].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Examples of AI-powered network services such as autonomous driving and drone detection are mentioned in the keynote description of 6G capabilities [S2].
MAJOR DISCUSSION POINT
New network services
Argument 9
India’s massive mobile data usage and emerging manufacturing base position it to lead AI adoption – India’s strategic advantage (Cristiano Amon)
EXPLANATION
Amon highlights India’s high per‑user mobile data consumption and its growing role as a manufacturing hub as key factors that give the country a strategic edge in AI deployment. He suggests that these strengths can translate into leadership in the AI era.
EVIDENCE
He cites that “one of the largest data consumption per user in mobile devices in the world is in India” and that “the whole Internet is mobile”, linking this to a “massive opportunity” for the country [152-154][149-151].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
India’s high mobile data consumption and manufacturing growth are highlighted as strategic AI advantages in analyses of India’s AI leadership [S13] [S14] [S12].
MAJOR DISCUSSION POINT
India’s strategic advantage
Argument 10
AI will drive smart manufacturing, smart cities, healthcare, education, and agriculture, democratizing technology – Cross‑industry impact (Cristiano Amon)
EXPLANATION
Amon describes AI as a catalyst for transformation across multiple sectors, from manufacturing and smart cities to health, education and agriculture. He frames this as a democratizing force that will broaden access to advanced services.
EVIDENCE
He lists examples such as “smart manufacturing and automation”, “smart cities”, “increase the scale, the reach, the access for healthcare”, “powerful learning tools” for education, and “agriculture” as areas where AI will have impact [160-168].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The broad societal impact of AI across manufacturing, cities, health, education, and agriculture is noted in reports on India’s AI ecosystem and global AI adoption [S13] [S14].
MAJOR DISCUSSION POINT
Cross‑industry impact
Argument 11
Qualcomm’s portfolio spans ultra‑low‑power to high‑power chips, enabling AI across every device class – Broad chip spectrum (Cristiano Amon)
EXPLANATION
Amon emphasizes Qualcomm’s unique ability to produce a wide range of semiconductor solutions, from sub‑2 mW chips for earbuds to multi‑kilowatt processors for data centres, thereby supporting AI on any device type.
EVIDENCE
He states that Qualcomm is “one of the few companies that can be working on chips from sub-2 milliwatts to a smart earbud … to now 2,000 watts per chip on the data centre” and calls the company “a very unique semiconductor company” [103-105].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Qualcomm’s ability to produce chips from sub-milliwatt to multi-kilowatt power levels is documented in technical overviews of its product portfolio [S10].
MAJOR DISCUSSION POINT
Broad chip spectrum
Argument 12
Qualcomm’s strategy is to empower partners and ecosystems rather than own all innovation, playing a focused enabling role – Partner‑centric approach (Cristiano Amon)
EXPLANATION
Amon asserts that Qualcomm sees its role as an enabler, providing technology that other industries and partners can build upon, rather than trying to control all innovation itself.
EVIDENCE
He remarks that “we never believe it is the job of one company to be responsible for all the innovation” and that Qualcomm’s mission is “to enable many industries and partner” [172-174].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Qualcomm’s philosophy of enabling partners rather than owning all innovation is reflected in statements about its Tech for Good program and partnership model [S11] [S2].
MAJOR DISCUSSION POINT
Partner‑centric approach
Agreements
Agreement Points
AI should serve humans and augment human agency rather than replace it
Speakers: Speaker 1, Cristiano Amon
Future built by humans using AI – Human‑centric AI (Speaker 1) One fundamental thing that AI is doing for us… it is changing the human computer interface because we don’t have to now learn how to use a computer… (Cristiano Amon)
Both speakers stress that AI is a tool that enhances human capabilities and that the future will be shaped by people who can confidently use AI, not by AI itself [2][24][30-31]
POLICY CONTEXT (KNOWLEDGE BASE)
This consensus mirrors policy guidance that AI must augment rather than replace human work, as seen in recommendations for companies to focus on enhancing human capabilities [S30] and maintaining human agency above algorithms [S31]. It aligns with human-centred AI principles that place people before models and keep accountability with human operators [S34], echoed in the WSIS Action Line C10 on ethics and a human-centred future [S33] and broader calls to enhance rather than replace humanity with AI [S29].
AI will become pervasive across many device types and contexts
Speakers: Speaker 1, Cristiano Amon
And also the last line was really an assuring line saying that the future will not be built by AI, but by humans who can confidently put AI to use… (Speaker 1) And also AI doesn’t just live in the cloud, it runs in your pocket, in your car, in the factory floors. (Speaker 1) Agents will be accessible from phones, glasses, wearables and other devices, creating a cross‑device ecosystem (Cristiano Amon) Smart‑glass scenario where the agent sees what you see and can purchase items for you (Cristiano Amon)
Both speakers highlight that AI will be present everywhere-from pockets, cars and factories to phones, smart glasses and wearables-making AI a ubiquitous layer across devices [6][37-38][51-58]
POLICY CONTEXT (KNOWLEDGE BASE)
Industry leaders describe AI agents that can be accessed via phones, glasses, pendants and other wearables, indicating a pervasive future [S22]. The rapid diffusion of AI into smart vehicles, buildings, medical robots and education systems further supports this view [S23]. Policy discussions on AI governance also note the need for security and oversight as AI penetrates critical infrastructure and essential services [S24].
Similar Viewpoints
Both see AI as an augmentative technology that should be controlled and directed by humans, preserving human agency [2][24][30-31]
Speakers: Speaker 1, Cristiano Amon
Future built by humans using AI – Human‑centric AI (Speaker 1) One fundamental thing that AI is doing for us… it is changing the human computer interface because we don’t have to now learn how to use a computer… (Cristiano Amon)
Both emphasize the ubiquity of AI across a wide range of devices and environments, foreseeing a seamless, device‑agnostic AI experience [6][37-38][51-58]
Speakers: Speaker 1, Cristiano Amon
AI doesn’t just live in the cloud, it runs in your pocket, in your car, in the factory floors (Speaker 1) Agents will be reachable from phones, glasses, wearables, creating a cross‑device ecosystem (Cristiano Amon)
Unexpected Consensus
Overall Assessment

The two speakers converge on two main ideas: (1) AI must remain a human‑centric tool that enhances rather than replaces human agency, and (2) AI will become omnipresent, spanning from personal gadgets to industrial settings and across emerging form‑factors like smart glasses. These points reflect a moderate level of consensus, indicating shared expectations about the role and deployment of AI, which can inform policy discussions on AI governance, human‑centered design, and infrastructure planning.

Moderate consensus – agreement on high‑level principles (human‑centric AI, pervasive deployment) but limited overlap on more detailed technical or policy arguments.

Differences
Different Viewpoints
Human‑centric AI vs. agentic AI as the primary driver of the future
Speakers: Speaker 1, Cristiano Amon
Future built by humans using AI – Human‑centric AI (Speaker 1) Agents will become the central platform, superseding OSs and apps – Central agent platform (Cristiano Amon)
Speaker 1 stresses that the future will be built by humans who confidently use AI, positioning AI as a tool that serves human agency [2]. In contrast, Amon envisions intelligent agents taking the central role in the ecosystem, acting autonomously across devices and even replacing operating systems and applications, thereby making AI the primary engine of future technology [26-30][106-108].
Unexpected Differences
None identified
Speakers:
The transcript contains only introductory remarks from Speaker 1 and a detailed keynote from Amon. No other speaker raised a position that directly contradicts Speaker 1 beyond the human‑centric vs. agentic AI framing, and no surprise topics (e.g., security, environmental impact) were contested.
Overall Assessment

The discussion shows limited overt conflict. The principal disagreement centers on who should drive the AI‑enabled future: humans as the primary decision‑makers (Speaker 1) versus autonomous, intent‑understanding agents that will become the core platform (Amon). Apart from this, both speakers converge on the transformative potential of AI across industries and the necessity of edge deployment.

Low to moderate – the clash is conceptual rather than technical, focusing on agency and control. It suggests that while stakeholders agree on AI’s importance, policy and governance discussions will need to address the balance between human oversight and autonomous agent deployment.

Partial Agreements
Both speakers agree that AI will be a transformative force. Speaker 1 highlights the importance of AI in the session and its role in shaping the future [1][2], while Amon stresses that AI will affect every sector and that no industry is immune to its impact [100-102]. Their divergence lies in who (humans vs. autonomous agents) will steer that transformation.
Speakers: Speaker 1, Cristiano Amon
Future built by humans using AI – Human‑centric AI (Speaker 1) AI will transform every industry – Cross‑industry impact (Cristiano Amon)
Both recognize the importance of AI at the edge. Speaker 1 notes that AI runs in “your pocket, in your car, in the factory floors” [6], and Amon emphasizes Qualcomm’s chips that enable billions of devices to think locally [21][104-105]. They share the goal of widespread edge AI deployment but differ on the emphasis (human‑centric use vs. hardware enablement).
Speakers: Speaker 1, Cristiano Amon
Future built by humans using AI – Human‑centric AI (Speaker 1) Edge AI enablement (Cristiano Amon)
Takeaways
Key takeaways
AI is evolving from chat‑bot interfaces to pervasive, intent‑aware agents that operate across phones, wearables, glasses and other edge devices.
Qualcomm’s chip portfolio enables AI processing on the edge, allowing billions of devices to run AI locally and act autonomously.
Agents are expected to become the central platform, superseding traditional operating systems and app stores, and will be accessible from multiple device form‑factors.
The cloud vs. edge debate is reframed: intelligence will be seamlessly distributed across cloud, near‑edge and on‑device, with tasks allocated based on latency and context requirements.
6G will embed AI into the network itself, providing large‑scale sensing, contextual data, and new services such as autonomous‑driving support, drone detection and aerial wide‑area connectivity.
India’s massive mobile data consumption, emerging manufacturing capabilities, and large user base create a strategic advantage for leading AI‑driven transformation across smart manufacturing, smart cities, healthcare, education and agriculture.
Qualcomm positions itself as an enabler, offering chips from sub‑2 mW to multi‑kilowatt data‑center solutions, and focusing on partnering with ecosystems rather than owning all innovation.
Resolutions and action items
None identified
Unresolved issues
How to effectively coordinate and standardize the distribution of AI workloads between cloud, edge and on‑device environments.
Security, privacy and trust mechanisms required for pervasive agentic AI and AI‑infused 6G networks were mentioned but not detailed.
Timeline and concrete roadmap for the rollout of 6G and the associated AI‑enabled network infrastructure remain unspecified.
Governance and ownership models for the new central agent platform (replacing OS/app stores) were not addressed.
Suggested compromises
None identified
Thought Provoking Comments
Future will not be built by AI, but by humans who can confidently put AI to use.
Sets a human‑centric framing for the whole discussion, reminding the audience that technology is a tool rather than a driver, and establishes a tone of responsibility and agency.
Serves as an opening pivot that frames the subsequent talk; it primes listeners to view Qualcomm’s AI roadmap as an enabler for people, not a replacement, and influences Amon’s later emphasis on agents that act on behalf of human intent.
Speaker: Speaker 1
AI is changing the human‑computer interface because we don’t have to learn how to use a computer; the AI understands what we see, hear, say, and write.
Identifies a fundamental shift from explicit UI design to implicit, multimodal interaction, highlighting a paradigm change in how users will engage with technology.
Triggers a transition from describing hardware capabilities to discussing new interaction models; it leads Amon to introduce the concept of “agents” as the next interface layer, steering the conversation toward software‑centric value creation.
Speaker: Cristiano Amon
The smartphone will be replaced by an agent – an entity that understands human intentions and can act across devices, freeing us from the constraints of hardware and app stores.
Proposes a radical re‑imagining of the mobile ecosystem, moving the locus of value from devices and platforms to autonomous agents, thereby challenging the existing OS‑app business model.
Creates a turning point where the discussion moves from incremental improvements to a disruptive vision; audience attention shifts to the implications for developers, OEMs, and the broader value chain.
Speaker: Cristiano Amon
The cloud‑vs‑edge debate is the wrong way to look at it; intelligence will be distributed across cloud, near‑edge, and on‑device, transparently to the user.
Refocuses the technical debate from a binary choice to a holistic, distributed architecture, emphasizing seamless integration rather than competition between cloud and edge.
Redirects the technical narrative, prompting listeners to think about system design as a continuum; it sets up the later discussion of 6G as an AI‑infused network that leverages this distributed model.
Speaker: Cristiano Amon
6G’s biggest contribution won’t just be speed; it will embed AI into the network itself, creating a large‑scale sensing fabric that can map the environment and feed context to agents.
Elevates the conversation from incremental bandwidth gains to a vision where the network becomes an intelligent sensor platform, fundamentally altering telecom’s role in AI ecosystems.
Marks a major shift in the talk’s scope—from device‑level AI to network‑level AI—opening new topics such as autonomous‑driving, drone detection, and industry‑wide sensing services.
Speaker: Cristiano Amon
India’s massive mobile data consumption and its leapfrogging of fixed‑line internet make it a prime arena for AI‑driven transformation across manufacturing, smart cities, healthcare, education, and agriculture.
Connects the global technological vision to a concrete regional opportunity, illustrating how AI and 6G can drive economic and social impact at scale.
Steers the discussion toward practical implications and policy considerations; it invites the audience to think about deployment, partnership, and societal benefits, rounding out the previously abstract vision.
Speaker: Cristiano Amon
Agents will replace a lot of the OSs and applications – they become the new platform for delivering services.
Extends the earlier agent concept to claim that entire software ecosystems (OS, app stores) will be subsumed, highlighting a potential industry‑wide disruption.
Deepens the analysis of the business impact, prompting listeners to reconsider development strategies, monetization models, and the future role of traditional software layers.
Speaker: Cristiano Amon
Overall Assessment

The discussion pivoted on a series of bold, forward‑looking statements that progressively reshaped the audience’s mental model—from a human‑centric reassurance, through a redefinition of interaction via multimodal agents, to a distributed AI architecture that blurs the line between cloud, edge, and device, and finally to an AI‑infused 6G network that serves as a pervasive sensing layer. Each thought‑provoking comment acted as a catalyst, opening new thematic avenues (interaction design, value‑chain disruption, architectural strategy, regional impact) and steering the conversation toward a holistic vision of AI as an omnipresent, context‑aware fabric. Collectively, these remarks transformed the session from a product showcase into a strategic narrative about how Qualcomm envisions the next generation of computing and connectivity, influencing both the technical and business perspectives of the audience.

Follow-up Questions
How will AI agents replace traditional operating systems and application stores in the mobile ecosystem?
Understanding this shift is crucial for anticipating changes in the value chain, developer models, and revenue streams.
Speaker: Cristiano Amon
What are the technical and architectural challenges of distributing AI workloads across cloud, edge, and on‑device processing?
A clear roadmap is needed to ensure low latency, context relevance, and seamless user experience.
Speaker: Cristiano Amon
How can privacy and security be ensured when agents collect massive personal data from devices such as smart glasses?
Protecting user data is essential for trust, regulatory compliance, and widespread adoption of pervasive AI agents.
Speaker: Cristiano Amon
What is the expected timeline and roadmap for 6G deployment and its AI‑enabled sensing capabilities?
Stakeholders need a concrete schedule to plan investments, standards development, and ecosystem readiness.
Speaker: Cristiano Amon
What standards and protocols are required for AI‑driven telecom networks to ensure interoperability across vendors and regions?
Common frameworks will enable a global AI‑enhanced network and avoid fragmentation.
Speaker: Cristiano Amon
How can India capitalize on AI‑driven transformation in manufacturing, smart cities, healthcare, education, and agriculture?
Identifying concrete use‑cases and policy measures will help India leverage its large data consumption and become a global AI hub.
Speaker: Cristiano Amon
What are the requirements for building large‑scale AI training data pipelines that incorporate personal device data while respecting privacy?
Effective, privacy‑preserving data collection is needed to create context‑aware agents without compromising user rights.
Speaker: Cristiano Amon
How will integration of AI agents across heterogeneous devices (phones, glasses, wearables) be managed in terms of user experience and developer ecosystems?
Coherent cross‑device interaction is vital for adoption and for developers to create consistent agent‑based applications.
Speaker: Cristiano Amon
What are the implications of AI agents for network traffic management and spectrum utilization in future 6G networks?
AI‑generated traffic patterns could affect capacity planning and require new management techniques.
Speaker: Cristiano Amon
How can Qualcomm’s hardware portfolio—from sub‑2 mW chips to 2 kW data‑center processors—be optimized for diverse AI workloads across edge and cloud?
Aligning chip design with the varied performance, power, and latency needs of edge and data‑center AI is key to Qualcomm’s strategy.
Speaker: Cristiano Amon

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Giordano Albertazzi


Session at a glance: Summary, keypoints, and speakers overview

Summary

The session featured Giordano Albertazzi, CEO of Vertiv, who emphasized that while AI discussions often highlight software capabilities, the physical data-center infrastructure that powers AI is equally critical [9-15]. He outlined Vertiv’s role in delivering power, cooling and overall data-center infrastructure, noting the company’s evolution from Emerson to an independent, publicly-traded firm with deep industry expertise [20-22].


Albertazzi explained that the rapid adoption of GPUs has driven extreme rack densification, with power per rack rising from 10-20 kW to potentially 150 kW or even a megawatt, fundamentally altering data-center design [30-33]. This shift requires a coordinated “body” of power-train and thermal systems rather than isolated components, and Vertiv aims to provide fully orchestrated solutions that integrate power, cooling and heat-reuse [45-48][70-76]. He highlighted the move from individual servers to AI pods and ultimately to treating an entire data center as a single compute unit capable of gigawatt-scale workloads [78-84].


To meet the speed and scale demands, Vertiv offers prefabricated, factory-tested modules, such as the “OneCore” system, that can cut deployment time by up to 85 % compared with traditional builds [91-97][118-124]. Speaker 3 contrasted conventional sequential construction with prefabricated modular approaches, noting that Vertiv’s solutions combine the benefits of both methods [118-122]. The company’s close collaboration with NVIDIA enables reference designs that match AI workloads and accelerates market adoption [54-58].


Albertazzi stressed India’s strategic importance because of abundant power, favorable demographics and existing Vertiv presence, and announced plans to expand capacity and invest further in the region [52-53][98-108][110-112]. He noted that faster, larger deployments create challenges in time and scale, which Vertiv addresses through integrated, resilient designs [49-51][89-90]. He also described Vertiv’s “future-resilient” architecture that can evolve as AI power densities increase [88-90]. Concluding with optimism, he asserted that the evolving infrastructure will sustain AI growth worldwide and that Vertiv is positioned to lead this transformation [113-115].


Keypoints


Major discussion points


The physical infrastructure (power and cooling) is the foundation that makes AI possible.


Albertazzi stresses that the “very important physical part of AI… makes AI actually possible” and that Vertiv’s role is to supply the best power-train and thermal chain for AI workloads [13-18][30-34].


AI workloads are driving extreme densification, requiring new power-density and voltage architectures.


He notes that racks are moving from 10-20 kW to 30-150 kW and even a megawatt per rack, and that the industry is migrating toward 800-volt DC power to handle this density [30-33][66-67].


Vertiv is promoting modular, pre-engineered solutions (OneCore/OneVert) to speed up deployment and cut labor.


The “fully pre-engineered, defined data center” (OneVert) and “repeatable converged infrastructure” that can scale from 12.5 MW to gigawatts are highlighted, along with prefabrication that can reduce build time by up to 85 % [60-62][86-88][96-97].


India is positioned as a strategic hub for AI data-center growth.


Albertazzi points to India’s abundant power, favorable demographics, and existing Vertiv presence, stating the company will “invest… expand capacity” and sees the country as a global AI hub [98-108].


Partnership with NVIDIA and a shift from server-centric to pod-/data-center-as-a-computer architectures.


He praises the collaboration with NVIDIA on reference designs and explains that the unit of compute is evolving from individual servers to AI pods and ultimately to an entire data center operating as a single computer [54-58][78-84].


Overall purpose / goal


The presentation aims to convince the audience that robust, high-density power and cooling infrastructure is the critical enabler for the AI revolution, to showcase Vertiv’s innovative, modular solutions (especially OneCore/OneVert) that can meet these demands quickly and at scale, and to underline the company’s strategic focus on India as a growth market while highlighting its partnership with NVIDIA.


Overall tone


The tone is consistently upbeat, confident, and promotional. Albertazzi begins with enthusiasm about AI’s possibilities, moves into technical detail with authority, then shifts to an optimistic, forward-looking stance when discussing India and future investments. Throughout, the language remains positive (“thrilled,” “optimistic,” “excited”) and never turns critical or defensive. No major tonal shift occurs; the optimism intensifies toward the end as he emphasizes market opportunities and partnerships.


Speakers

Speaker 1


– Role/Title: Moderator / event host who introduces speakers [S4][S6]


– Area of expertise: Not specified


Speaker 3


– Role/Title: Not specified (appears to narrate a Vertiv video presentation on the OneCore system)


– Area of expertise: Not specified


Giordano Albertazzi


– Role/Title: Chief Executive Officer, Vertiv (Representative from Vertiv) [S7]


– Area of expertise: Digital infrastructure solutions for data centers, AI-related power and cooling systems


Additional speakers:


(none)


Full session report: Comprehensive analysis and detailed insights

Introduction (Speaker 1) – Speaker 1 opened the session by introducing Mr Giordano Albertazzi, chief executive officer of Vertiv, a global provider of digital-infrastructure solutions for data centres and communication networks, and noted Vertiv’s ambition to accelerate innovation and support critical applications worldwide [1-3].


Physical layer of AI (Albertazzi) – Albertazzi observes that most AI conversations celebrate what AI can do, especially in India, while the “very important physical part of AI… makes AI actually possible” is often ignored, a point he makes early in his remarks about the need for power, cooling and overall data-centre infrastructure [9-15].


Vertiv background (Albertazzi) – He outlines Vertiv’s heritage: originally part of Emerson Electric, the company has been an independent, publicly-traded entity for almost a decade and brings decades of expertise in delivering the physical layer that enables the rapidly evolving AI-IT stack [18-22].


Extreme densification (Albertazzi) – Rack power density has jumped from the historic 10-20 kW range to 30-150 kW today, with future designs envisaging up to 1 MW per rack, fundamentally altering data-centre design and power-heat management requirements [30-34].


Brain-body analogy & orchestrated infrastructure (Albertazzi) – Using a biological analogy, Albertazzi explains that just as the brain needs a body, the AI “brain” (the IT stack) requires a well-orchestrated “body” of power-train and thermal systems; the power chain must move from grid to chip, and the thermal chain must handle heat extraction, rejection and reuse, all as an interoperable whole [38-48][70-76]. To accommodate the higher densities, Vertiv is transitioning to 800-V DC distribution, which better supports the increased power loads [66-76].


Shift in compute unit (Albertazzi) – He notes that the basic compute unit is shifting from the traditional server to an AI pod, and ultimately to the entire data centre operating as a single computer capable of gigawatt-scale workloads [78-84].


Vertiv’s modular, pre-engineered solution (Albertazzi) – Vertiv showcases a modular data-centre solution – referred to as “one vertigo, one core” (OneVert/OneCore) – that provides a repeatable, converged infrastructure and can be scaled from 12.5 MW up to gigawatt capacities, delivering a “future-resilient” platform [60-62][86-88].


Prefabrication advantage (Albertazzi) – The VertiSmart Run prefabrication methodology can cut deployment timelines by roughly 85 %, delivering a nearly order-of-magnitude speed-up compared with traditional, labour-intensive builds [91-97].


India as a strategic AI hub (Albertazzi) – Albertazzi highlights India’s abundant power supply, favourable demographics, and Vertiv’s long-standing presence as reasons to view the country as a global AI centre, and he announces plans to expand capacity and increase investment in the Indian market [52-53][98-108][110-112].


Collaboration with NVIDIA (Albertazzi) – Joint reference designs with NVIDIA are being co-developed to align infrastructure with AI workloads, positioning the partnership as a driver of market leadership and accelerated adoption of AI-optimised data centres [54-58].


Future-resilient design (Albertazzi) – He stresses the need for “future-resilient” designs that can withstand rapid deployment pressures while maintaining reliability, noting that Vertiv’s integrated approach addresses both time-to-market and scale constraints [49-51][89-90].


Closing (Speaker 1 & Albertazzi) – In closing, Albertazzi expresses strong optimism about the role of robust infrastructure in sustaining AI’s growth, thanks the audience for their attention, and is acknowledged by the moderator for his impactful address [113-115][117].


Session transcript: Complete transcript of the session
Speaker 1

Well, ladies and gentlemen, now it’s my pleasure to invite our next speaker, Mr. Giordano Albertazzi, who is the chief executive officer of Vertiv, a global company that provides digital infrastructure solutions for data centers, communication networks. Under his leadership, Vertiv is advancing its role as a global industry leader by accelerating innovation, strengthening technology leadership, and enabling the digital infrastructure that powers critical applications worldwide. Ladies and gentlemen, please welcome Mr. Giordano Albertazzi.

Giordano Albertazzi

Thank you very much. The clicker? Oh, yeah, here. Better with the clicker. Good afternoon, everyone. And it’s absolutely a pleasure and an honor being on this stage where so many distinguished presenters. In the last two days, I’ve had the opportunity to talk about AI. An astonishing thing to me is that the majority of the AI conversations, as it should be, are about what AI can do. Very interesting presentation, just finished, tells about all the beautiful things that AI can do and particularly what AI can do here in India. But when we talk AI, we also talk about data centers. But let me go then to the physical part of AI, not just what AI can do for us.

There is an important, very important physical part of AI that sometimes is overlooked. And it shouldn’t, because it’s that physical part that makes AI actually possible. So I’ll talk about the physical part today. I will talk about the power. The cooling, the data center infrastructure. Vertiv and myself, with Vertiv, have been in the industry for decades. Well, Vertiv longer than me. It used to be part of Emerson, Emerson Electric, and we are almost 10 years as an independent company now publicly traded in New York. But what we do is really make sure that that physical part is provided with the best technology that supports the continuous evolution of the AI IT stack as those rapidly, almost exponentially, and I’m talking almost exponentially from a mathematical standpoint, evolve.

And it’s no easy task, a task that we would do very well because we know the space a lot. We have a lot of innovation. But there are several dimensions. to this. One is the extreme densification. Now, we all know what GPUs are. Probably two years ago, majority of people didn’t have any clue about what a GPU is. But now, GPU, NVIDIA is absolutely central to everything, all the conversation about AI. Well, that phenomenal evolution from a technology standpoint is changing the DNA of a data center. What used to be a rack with IT inside 10, 15, 20 kilowatt per rack is rapidly becoming more dense and with more power and heat to dissipate in it. This is going to 30, 50, 150 kilowatts per rack all the way in the possible future.

One megawatt per rack. That’s a lot of power in a single rack. The design of a data center is changing dramatically. As this design changes, of course, also the technology that supports it needs to change. But let me go back to AI, artificial intelligence. Let me go back to, and let me draw a parallel. Human intelligence. Human intelligence happens in the brain. But the brain doesn’t survive without a body. What we are, what we do at Vertiv, make that body, provide that technology for that body so that the brain can function, and that brain is the IT stack. But not only that brain can function, but also can produce intelligence. And that’s what an AI does.

That’s what an AI factory, an AI data center is doing. But just like the body, historically, data centers and data center engineering was viewed as disparate systems coming together. Now, we cannot think about a human body, or any body, as individual parts, a chiller, a liquid cooling unit, an uninterruptible power supply, or whatever else in the powertrain or thermal chain you can think of. Everything needs to be orchestrated. Everything needs to be interoperable. Everything must be thought of as one thing. And that’s what we do in a world that is extraordinarily challenging, but it’s a challenge that we, of course, respond to very successfully, challenging in terms of time of deployment and in terms of scale of deployment.

Okay. Data centers need to be developed faster and faster and are becoming bigger and bigger. You heard that. It is about data centers. India is certainly privileged from an AI standpoint also because there is a lot of power available that can be harnessed for more and more powerful and larger data centers. Now, as that happens, again, if you think, go back to my analogy of the body, you think about a system, you think about everything that is the body of artificial intelligence, then it is about changing the way we build that body from one piece at a time with a lot of activity happening on site, laborious, hard from a quality standpoint, to most integrated at factory level and deployment.

As NVIDIA continues to lead the world in terms of technology, in terms of IT stack, but also in terms of thought process for the infrastructure. And it’s something that we do a lot together. Well, then it is not just about the infrastructure and the speed and the size and the scale. It’s also about optimizing the infrastructure with reference designs that exactly target that type of application. So we, of course, are thrilled and always honored to partner with NVIDIA in this adventure and venture and lead the market in this respect. So here you have an example of what we call one vertigo, one core. There’s an example of a fully pre-engineered, defined data center. But when we talk about the body, the body of AI, the data center, then let’s talk very simply.

We talk about three fundamental, fundamental elements of that body. One is the powertrain. So everything that goes from the grid, if you will, from your utility, takes that power all the way to the chip. That power infrastructure is changing, is evolving as the power density changes. And the current architectures are migrating towards, over time, what is an 800-volt DC power infrastructure. I’m going technical on you. Some of you I know are very technical, so I’m not afraid about that. But I will not go deep. So everything you see on the left side of this is exactly a representation of that powertrain. So bring the energy, take that energy to the chip. Then the chip and all the electronic components in a server generate heat.

And that heat can be very dense. And require very, very advanced cooling mechanisms. And that’s the beginning of what we like to call a thermal chain that starts, and it’s what you see on the right side of this chart from the chip all the way to the heat extraction, the heat then rejection, or even more importantly, and more extensively so, is the heat reuse. So this is the system, the fundamental systems of this body. But again, it’s not just about the components of the system, it’s how the entire system works. And more and more, we see that when we think about the AI IT infrastructure, what used to be thought of as a server at a time is becoming really an AI pod, an AI unit at a time.


The unit of compute is no more the server. It is the pod. Unit of compute is not even the pod. It’s the entire data center operating as one single computer. A unit of compute that can go all the way to gigawatts. So it is about making sure, and I believe we do it very well, that I say uniquely well, but of course I root for ourselves. It is about making that infrastructure available at scale and in a very easy modular to deploy fashion. And that’s what we do. So a repeatable converged infrastructure major. So we have a lot of building blocks that can go from a 12.5 megawatts all the way to gigawatts, all the way to gigawatts.

So clearly it’s not just about building that infrastructure, but that infrastructure over time needs to be, like we like to say, future resilient. Some people, like myself, have been in the industry of data center for quite some time, and it’s fascinating the speed at which things happen. And this speed is also enabled by new solutions that make prefabricated and very fast to deploy part of data centers that used to be very, very laborious. Take a data center. It’s empty when the building is new. You have to fill it with power, with cooling, with cables. You have to put the racks. Very laborious and time consuming. Time to token is of the essence. A prefabrication, for example, with what we do with Verti Smart Run, reduces the time to deploy almost 85%, almost an order of magnitude.

So the industry is changing, not only in scale and in density, but also in the way things are done and deployed. Let me take a different angle now and focus on India. India clearly central to the AI evolution revolution and central certainly in terms of the infrastructure that is being built and the infrastructure that will be built in the future and in the coming years. This infrastructure and the speed at which this infrastructure will be built, of course, will depend, as I was saying, by the ability of the likes of Vertiv, but certainly Vertiv, given our prominent position also in India, to really enable this at scale and at speed in the ways that I explained.

So Vertiv in India has a long tradition. We’ve been here for decades. We have what I believe is an awesome team and awesome partnerships. And now this forum, these sessions, these few days convinced me even more of the importance of India as a place. A place to invest. And invest we will. We are expanding our capacity and will continue to expand capacity. We see India certainly as an extremely promising market. as a hub for AI, not only for India, but globally. So it has got the power availability. Certainly India has got the right demographics. So I couldn’t be more excited about the business in India. I couldn’t be more excited about what we’re doing in India and what our partners are doing in India.

So with that, I’m extremely optimistic. I’m a big optimist about what AI will bring, as we heard. And with that, thank you very much. Thank you.

Speaker 1

Thank you so much, Mr. Albertazzi, for your impactful address and also for…

Speaker 3

Data centers have, up until now, been usually constructed in one of two ways. Traditional data center build follows a sequential process, materials and equipment arriving individually on site, with the build progressing from the ground up. Alternatively, prefabricated modular construction can offer many advantages, such as quicker deployments and risk reduction. Vertiv offers many solutions in this space. However, in the age of increasing IT loads powered by artificial intelligence, there’s another option that combines the advantages of both. Vertiv OneCore. Vertiv power and thermal infrastructure building blocks are inserted into a brand new Vertiv-supplied steel building shell, or an existing building. Infrastructure building blocks are made in controlled factory environments and tested before construction. The system is also equipped with a new, more efficient…

Related Resources: Knowledge base sources related to the discussion topics (13)
Factual Notes: Claims verified against the Diplo knowledge base (4)
Confirmed (high)

“Speaker 1 introduced Giordano Albertazzi as chief executive officer of Vertiv, a global provider of digital‑infrastructure solutions for data centres and communication networks.”

The knowledge base identifies Giordano Albertazzi as CEO of Vertiv discussing critical physical infrastructure for AI, confirming his role and the company’s focus on data-centre solutions [S8] and [S7].

Confirmed (high)

“Albertazzi observes that most AI conversations celebrate what AI can do, especially in India, while the “very important physical part of AI… makes AI actually possible” is often ignored.”

Albertazzi’s emphasis on the overlooked physical infrastructure that enables AI is directly stated in the knowledge base [S8].

Additional Context (medium)

“Rack power density has jumped from the historic 10‑20 kW range to 30‑150 kW today, with future designs envisaging up to 1 MW per rack.”

The source notes the evolution from a few kilowatts per rack to 10-30 kW and cites current deployments in India at around 80 kW per rack, confirming the upward trend but not the 1 MW projection [S50].

Confirmed (high)

“India’s abundant power supply, favourable demographics, and Vertiv’s long‑standing presence make the country a global AI centre; Vertiv plans to expand capacity and increase investment in the Indian market.”

Albertazzi’s remarks about India as a strategic AI hub and Vertiv’s activities there are reflected in the knowledge base, which highlights his focus on India’s physical-layer potential and ongoing high-density rack deployments [S8] and [S50]; broader commentary on India’s AI opportunities appears in [S47].

External Sources (50)
S1
Building the Workforce_ AI for Viksit Bharat 2047 — -Speaker 1- Role/Title: Not specified, Area of expertise: Not specified -Speaker 3- Role/Title: Not specified, Area of …
S2
S3
Advancing Scientific AI with Safety Ethics and Responsibility — – Speaker 1- Speaker 2- Speaker 3 – Speaker 1- Speaker 3- Moderator
S4
Keynote-Martin Schroeter — -Speaker 1: Role/Title: Not specified, Area of expertise: Not specified (appears to be an event moderator or host introd…
S5
Responsible AI for Children Safe Playful and Empowering Learning — -Speaker 1: Role/title not specified – appears to be a student or child participant in educational videos/demonstrations…
S6
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Vijay Shekar Sharma Paytm — -Speaker 1: Role/Title: Not mentioned, Area of expertise: Not mentioned (appears to be an event host or moderator introd…
S7
S8
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Giordano Albertazzi — – Giordano Albertazzi- Announcer – Giordano Albertazzi- Video presentation Artificial intelligence | Information and c…
S9
https://dig.watch/event/india-ai-impact-summit-2026/heterogeneous-compute-for-democratizing-access-to-ai — That’s the edge cloud. And as you go deeper from there onwards, then you have the data centers. It then mitigates the ov…
S10
The Global Power Shift India’s Rise in AI & Semiconductors — The panelists emphasized that true AI leadership requires alignment across four key pillars: silicon, software, systems,…
S11
From KW to GW Scaling the Infrastructure of the Global AI Economy — Good morning to all of you. As Rakesh has already introduced, two companies are planning for a lot of things together. A…
S12
WS #53 Promoting Children’s Rights and Inclusion in the Digital Age — – Speaker 3 (Janatu): Department of Public Administration, Kumile University Speaker 3: Hello, everyone. My topic is…
S13
Internet Society’s Collaborative Leadership Exchange (CLX) | IGF 2023 Day 0 Event #95 — Speaker 3:I’m Jeremy. I’m from Myanmar. Today I just would like to point out the digital guidelines about the online gov…
S14
https://dig.watch/event/india-ai-impact-summit-2026/inclusive-ai-starts-with-people-not-just-algorithms — So we’re going to give like 30 seconds to each of the panelists as they close. I mean, I think on learning you just star…
S15
Any other business /Adoption of the report/ Closure of the session — In summary, the speaker artfully blended expressions of gratitude with recognition of collaborative efforts and a call f…
S16
Open Mic & Closing Ceremony — 9. Recognition and Appreciation: Hajia Sani: Hmm. Another round of applause, please. Another round of applause. Thank y…
S17
Masterclass#1 — Sherif Hashem :Sure, I’d like to thank all the speakers for such excellent and comprehensive presentations, but I’d like…
S18
Green AI and the battle between progress and sustainability — AI is increasingly recognised for its transformative potential and growing environmental footprint across industries. Th…
S19
From summer disillusionment to autumn clarity: Ten lessons for AI — Overall, what’s notable in all these political developments is pragmatism. The lofty narratives of last year – like fear…
S20
The Innovation Beneath AI: The US-India Partnership powering the AI Era — So what is going to be scarce in the times to come is not electrification, as Roshani said. We have enough math works wh…
S21
Building Climate-Resilient Systems with AI — How do you execute on it? How do you start delivering the outcome that I think we all are looking for? So that’s the kin…
S22
AI adoption leaves workers exhausted as a new study reveals rising workloads — Researchers from UC Berkeley’s Haas School of Businessexaminedhow AI shapes working habits inside a mid-sized technology…
S23
HETEROGENEOUS COMPUTE FOR DEMOCRATIZING ACCESS TO AI — India’s unique position—combining technical talent, diverse datasets, a vibrant startup ecosystem, and supportive policy…
S24
Building the Next Wave of AI_ Responsible Frameworks & Standards — What is interesting is India is uniquely positioned in this global AI discourse. Most global AI frameworks are designed …
S25
Book launch: What changes and remains the same in 20 years in the life of Kurbalija’s book on internet governance? — ## The Shift from Knowledge to Data in Policy Language 3. **Processing Architecture Shift**: The transition from CPU-ba…
S26
Nvidia partners with Reliance and Tata to expand AI presence in India’s growing ecosystem — Nvidia, a semiconductor company in California,has revealed plans for partnerships with major Indian corporations, Relian…
S27
Panel Discussion: Europe’s AI Governance Strategy in the Face of Global Competition — Moderator Alexander E. Brunner opened with a provocative observation based on recent conversations with technology leade…
S28
NVIDIA powers a new wave of specialised AI agents to transform business — Agentic AIhas entereda new phase as companies rely on specialised systems instead of broad, one-size-fits-all models. Op…
S29
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Giordano Albertazzi — The speaker stressed that modern data centers can no longer be viewed as disparate systems but must be orchestrated as i…
S30
Opportunities of Cross-Border Data Flow-DFFT for Development | IGF 2023 WS #224 — However, it is crucial to address the trust deficit between users and companies. To achieve this, public policy framewor…
S31
Regulating Open Data_ Principles Challenges and Opportunities — Digital ecosystems simply do not function in silos. However, enabling data to move across borders should not mean that c…
S32
From KW to GW Scaling the Infrastructure of the Global AI Economy — Project timelines have compressed from 18 months in cloud world to 4-6 months in GPU world, requiring faster capacity bu…
S33
The Innovation Beneath AI: The US-India Partnership powering the AI Era — So very much working on that. And on your question from an innovation perspective, well, we all know the hype cycle. And…
S34
Building Climate-Resilient Systems with AI — How do you execute on it? How do you start delivering the outcome that I think we all are looking for? So that’s the kin…
S35
From KW to GW Scaling the Infrastructure of the Global AI Economy — And it’s going to be a system approach. System. Systems. Think systems. we as an industry have thought boxes for too lon…
S36
https://dig.watch/event/india-ai-impact-summit-2026/building-trusted-ai-at-scale-cities-startups-digital-sovereignty-keynote-giordano-albertazzi — There is an important, very important physical part of AI that sometimes is overlooked. And it shouldn’t, because it’s t…
S37
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Giordano Albertazzi — “And the current architectures are migrating towards, over time, what is an 800 -volt DC power infrastructure.”[50]. “An…
S38
AI adoption leaves workers exhausted as a new study reveals rising workloads — Researchers from UC Berkeley’s Haas School of Businessexaminedhow AI shapes working habits inside a mid-sized technology…
S39
The Global Power Shift India’s Rise in AI & Semiconductors — The panelists emphasized that true AI leadership requires alignment across four key pillars: silicon, software, systems,…
S40
Indias Roadmap to an AGI-Enabled Future — Dua argued that India could become a global compute hub, potentially processing 40-50% of the world’s data by leveraging…
S41
Powering AI _ Global Leaders Session _ AI Impact Summit India Part 2 — Data sovereignty policies requiring local data storage are essential to drive domestic data center investment and capita…
S42
Welcome Address — India positions itself as a central hub of technology talent, leveraging a strong IT background and dynamic startup ecos…
S43
Book launch: What changes and remains the same in 20 years in the life of Kurbalija’s book on internet governance? — 3. **Processing Architecture Shift**: The transition from CPU-based to GPU-based computing, fundamentally altering how c…
S44
Nvidia partners with Reliance and Tata to expand AI presence in India’s growing ecosystem — Nvidia, a semiconductor company in California,has revealed plans for partnerships with major Indian corporations, Relian…
S45
Intel to design custom CPUs as part of NVIDIA AI partnership — The two US tech firms, NVIDIA and Intel,have announceda major partnership to develop multiple generations of AI infrastr…
S46
Scaling Trusted AI_ How France and India Are Building Industrial & Innovation Bridges — . . . . . . . . . . . . . . one of our keynote speakers, they said autonomous weapons are going to AI -based autonomous …
S47
From India to the Global South_ Advancing Social Impact with AI — And I think with the current government’s focus on multiple domains like logistics, maybe marine, aeronautics, aviation,…
S48
https://dig.watch/event/india-ai-impact-summit-2026/indias-roadmap-to-an-agi-enabled-future — They are not connected end to end in terms of building a digital twin of this electric system, right? We built something…
S49
Comprehensive Summary: The Future of Robotics and Physical AI — And that actually is very important for roboticists to understand. There’s a lot of power in understanding the physical …
S50
Keynote-Olivier Blum — For those who are not very familiar with what is a data center, we are talking, about a couple of years, about a couple …
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
G
Giordano Albertazzi
7 arguments · 123 words per minute · 1705 words · 830 seconds
Argument 1
Extreme GPU densification drives high power per rack (Giordano Albertazzi)
EXPLANATION
The rapid adoption of GPUs for AI workloads is dramatically increasing the power density of racks. What used to be 10‑20 kW per rack is now moving toward 30‑150 kW and even up to a megawatt per rack, creating new challenges for data‑center design.
EVIDENCE
Albertazzi describes the shift from modest rack power levels (10-20 kW) to much higher densities, noting that a rack could reach 30, 50, 150 kW and potentially one megawatt as GPU usage expands [25-33].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Albertazzi’s presentation notes the shift from traditional 10-20 kW racks to 30-150 kW and even megawatt-level racks as AI GPUs proliferate, corroborated by the keynote summary in [S8].
MAJOR DISCUSSION POINT
Physical infrastructure demands of AI workloads
Argument 2
Shift to 800‑V DC power and advanced cooling needed (Giordano Albertazzi)
EXPLANATION
To support the higher power densities, data‑center power architectures are moving toward 800‑volt DC distribution. At the same time, the resulting heat loads require sophisticated cooling and thermal‑chain solutions.
EVIDENCE
He explains that the powertrain is migrating to an 800-V DC infrastructure and that the heat generated by dense compute requires advanced cooling mechanisms and heat-reuse strategies [64-67][73-75].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
He emphasizes migration to 800-volt DC distribution and the need for sophisticated cooling and heat-reuse solutions, as described in the same keynote coverage [S8].
MAJOR DISCUSSION POINT
Physical infrastructure demands of AI workloads
Argument 3
Components must operate as a unified, interoperable body (Giordano Albertazzi)
EXPLANATION
Albertazzi argues that data‑center subsystems—power, cooling, UPS, etc.—cannot be treated as separate pieces. They must be orchestrated and interoperable, functioning as a single integrated “body” that supports AI workloads.
EVIDENCE
He uses a body analogy, stating that data-center components need to be orchestrated, interoperable, and thought of as one thing rather than disparate systems [45-48].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The talk uses a body analogy, arguing that power, cooling, UPS and other subsystems must be orchestrated as a single entity; this integrated view is highlighted in the keynote analysis [S8].
MAJOR DISCUSSION POINT
Integrated, orchestrated data‑center design
AGREED WITH
Speaker 3
Argument 4
Compute unit evolving from server to pod to whole‑center scale (Giordano Albertazzi)
EXPLANATION
The traditional server is being replaced by AI pods, and ultimately the entire data centre is treated as a single computer. This shift reflects the need to handle gigawatt‑scale compute for AI.
EVIDENCE
He notes that the unit of compute has moved from the server to the AI pod and now to the whole data centre operating as one computer, capable of gigawatt-level power [78-84].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Albertazzi describes the progression from individual servers to AI pods and finally to the entire data centre acting as one computer, a shift documented in the keynote summary [S8].
MAJOR DISCUSSION POINT
Integrated, orchestrated data‑center design
Argument 5
Vertiv Smart Run prefabrication reduces deployment time by ~85% (Giordano Albertazzi)
EXPLANATION
Prefabricated, factory‑built data‑center modules can dramatically shorten construction schedules. Vertiv’s Smart Run solution claims to cut deployment time by roughly 85%, enabling faster scaling of AI infrastructure.
EVIDENCE
He describes how prefabrication, exemplified by Vertiv Smart Run, reduces the time to deploy a data centre by almost 85%, an order-of-magnitude improvement over traditional builds [91-97].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
He cites Vertiv’s Smart Run modular, factory-built solution cutting deployment schedules by roughly 85%, a claim supported by the discussion of prefabricated construction benefits in [S8].
MAJOR DISCUSSION POINT
Modular and prefabricated construction for rapid deployment
AGREED WITH
Speaker 3
Argument 6
India’s power availability and demographics position it as a global AI hub (Giordano Albertazzi)
EXPLANATION
Albertazzi highlights India’s abundant power resources and favorable demographics as key factors that make the country an attractive location for large‑scale AI data centres. He sees India as a strategic hub not only for domestic AI but for the global market.
EVIDENCE
He points out that India is privileged from an AI standpoint because of abundant power and the right demographics, emphasizing its role as a global AI hub [52-53][109-111].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The panel on India’s AI rise highlights the country’s abundant power resources, talent pool and favorable demographics as key factors for becoming a global AI hub, aligning with Albertazzi’s point [S10].
MAJOR DISCUSSION POINT
India’s strategic role in AI data‑center growth
Argument 7
Vertiv’s long‑standing presence and capacity‑expansion plans underscore commitment to India (Giordano Albertazzi)
EXPLANATION
Vertiv has operated in India for decades, building a strong team and partnerships. The company plans to expand capacity further, reinforcing its commitment to making India a major AI data‑center hub.
EVIDENCE
He references Vertiv’s decades-long presence, an “awesome team,” ongoing capacity expansion, and the view of India as an extremely promising market and AI hub [101-108].
MAJOR DISCUSSION POINT
India’s strategic role in AI data‑center growth
S
Speaker 3
1 argument · 156 words per minute · 149 words · 57 seconds
Argument 1
Vertiv OneCore combines modular building blocks with a steel shell for fast builds (Speaker 3)
EXPLANATION
OneCore integrates Vertiv’s power and thermal infrastructure modules into a pre‑engineered steel building shell, whether new or retrofitted. This approach merges the speed of modular construction with the robustness of a full building envelope.
EVIDENCE
The speaker explains that Vertiv OneCore inserts power and thermal building blocks into a Vertiv-supplied steel building shell, with components manufactured and tested in a factory before on-site installation [123-125].
MAJOR DISCUSSION POINT
Modular and prefabricated construction for rapid deployment
AGREED WITH
Giordano Albertazzi
S
Speaker 1
1 argument · 131 words per minute · 88 words · 40 seconds
Argument 1
Appreciation and recognition of the impactful address (Speaker 1)
EXPLANATION
Speaker 1 thanks the presenter for his impactful address, acknowledging the value of the contribution to the forum.
EVIDENCE
The closing remark thanks the speaker, rendered as "Mr. El-Battazi" in the transcript (i.e., Mr. Albertazzi), for his impactful address [117].
MAJOR DISCUSSION POINT
Closing acknowledgment
Agreements
Agreement Points
Modular and prefabricated construction dramatically shortens deployment time for AI data centers
Speakers: Giordano Albertazzi, Speaker 3
Vertiv Smart Run prefabrication reduces deployment time by ~85% (Giordano Albertazzi)
Vertiv OneCore combines modular building blocks with a steel shell for fast builds (Speaker 3)
Both speakers stress that Vertiv’s modular, factory-built solutions – Smart Run (which cuts deployment time by about 85%) and OneCore (which inserts pre-engineered power and thermal blocks into a steel building shell) – enable much faster roll-out of large AI-driven data centres [91-97][123-125].
POLICY CONTEXT (KNOWLEDGE BASE)
Industry analyses note that project timelines have compressed from 18 months to 4-6 months, and reference designs with prefabricated modules enable faster capacity building by shifting testing and integration off-site [S32].
Data‑center subsystems must be treated as a single, interoperable system
Speakers: Giordano Albertazzi, Speaker 3
Components must operate as a unified, interoperable body (Giordano Albertazzi)
Vertiv OneCore combines modular building blocks with a steel shell for fast builds (Speaker 3)
Albertazzi uses a body analogy to argue that power, cooling, UPS and other components need to be orchestrated as one entity, while Speaker 3 describes OneCore as a solution that physically integrates power and thermal modules into a single building envelope, reflecting the same integrated-system perspective [45-48][123-125].
POLICY CONTEXT (KNOWLEDGE BASE)
Thought leaders stress that modern data centers should be orchestrated as integrated, interoperable units rather than disparate components [S29], echoing broader policy calls for open, standards-based, interoperable frameworks in digital ecosystems [S30][S31].
Similar Viewpoints
Both emphasize that a coherent, tightly integrated infrastructure – rather than a collection of disparate pieces – is essential for supporting the high‑density AI workloads of the future [45-48][123-125].
Speakers: Giordano Albertazzi, Speaker 3
Components must operate as a unified, interoperable body (Giordano Albertazzi)
Vertiv OneCore combines modular building blocks with a steel shell for fast builds (Speaker 3)
Both present Vertiv’s modular, factory‑built approaches as the answer to the speed and scale challenges posed by AI‑driven data‑center expansion [91-97][123-125].
Speakers: Giordano Albertazzi, Speaker 3
Vertiv Smart Run prefabrication reduces deployment time by ~85% (Giordano Albertazzi)
Vertiv OneCore combines modular building blocks with a steel shell for fast builds (Speaker 3)
Unexpected Consensus
Overall Assessment

The speakers converge on two main ideas: (1) modular, prefabricated construction (Smart Run, OneCore) is a key accelerator for AI data‑center deployment, and (2) the physical infrastructure must be conceived as a unified, interoperable system rather than isolated components. These points reflect a clear consensus on the technical and operational pathways needed to meet the rapid growth of AI workloads.

Moderate to strong consensus on infrastructure integration and rapid‑deployment solutions, indicating that participants share a common vision for how the data‑center ecosystem should evolve to support AI. This alignment suggests that industry stakeholders are likely to collaborate on standardising modular designs and integrated power‑thermal architectures.

Differences
Different Viewpoints
Unexpected Differences
Overall Assessment

The discussion shows strong alignment among the speakers on the need for integrated, high‑density AI data‑center infrastructure and the value of modular, prefabricated construction. The only variation is in the specific Vertiv product highlighted (Smart Run vs OneCore). No substantive contradictions were identified.

Low – the participants largely concur on the challenges and objectives, with only minor differences in preferred implementation pathways, suggesting a cohesive industry stance on accelerating AI‑driven data‑center deployment.

Partial Agreements
Both speakers emphasize the importance of modular, prefabricated solutions to accelerate data‑center deployment, but they highlight different Vertiv offerings – Albertazzi focuses on the Smart Run approach that cuts build time by about 85% [91-97], while Speaker 3 describes the OneCore system that integrates power and thermal modules into a steel building shell [123-125]. The emphasis on distinct products shows agreement on the goal (rapid deployment) but differing views on the preferred solution.
Speakers: Giordano Albertazzi, Speaker 3
Vertiv Smart Run prefabrication reduces deployment time by ~85% (Giordano Albertazzi)
Vertiv OneCore combines modular building blocks with a steel shell for fast builds (Speaker 3)
Takeaways
Key takeaways
AI workloads demand extreme physical infrastructure, with GPU densification driving rack power needs from ~20 kW to potentially 1 MW.
Data-center power architecture is shifting toward high-voltage (800 V DC) systems and advanced cooling/thermal chains, including heat reuse.
Modern data-center design must be fully integrated and orchestrated; the compute unit is evolving from individual servers to pods and ultimately whole-facility “computers.”
Modular, prefabricated solutions (Vertiv Smart Run, Vertiv OneCore) can cut deployment time by up to ~85%, enabling faster scaling of AI-focused facilities.
India is positioned as a strategic global AI hub due to abundant power, favorable demographics, and Vertiv’s long-standing presence; Vertiv plans to expand capacity and investment in the region.
Partnerships with technology leaders such as NVIDIA are central to delivering reference designs that match AI application requirements.
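As a back-of-envelope illustration of the densification figures above, the following sketch computes how many racks a fixed IT power budget supports at each cited density; the 100 MW facility size is an assumption for illustration, not a figure from the talk.

```javascript
// Rack count for a fixed IT power budget at the densities cited in the
// keynote: ~20 kW legacy racks, 150 kW dense AI racks, 1 MW per rack.
function racksPerFacility(facilityKW, rackKW) {
  return Math.floor(facilityKW / rackKW);
}

const facilityKW = 100_000; // assumed 100 MW of IT load

console.log(racksPerFacility(facilityKW, 20));   // → 5000 legacy racks
console.log(racksPerFacility(facilityKW, 150));  // → 666 dense AI racks
console.log(racksPerFacility(facilityKW, 1000)); // → 100 megawatt-class racks
```

The same power budget supports fifty times fewer racks at megawatt density, which is why power delivery and cooling, rather than floor space, become the binding constraints.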
Resolutions and action items
Vertiv will continue to invest in and expand its capacity in India to support AI data-center growth.
Vertiv will deepen its partnership with NVIDIA to develop and deliver AI-optimized reference designs.
Vertiv will promote and deploy its prefabricated, modular solutions (Smart Run, OneCore) to accelerate data-center deployments.
Unresolved issues
Specific implementation plans for transitioning existing data centers to 800 V DC power architectures.
Details on how heat-reuse systems will be integrated at scale and their economic viability.
Standardization and interoperability frameworks for fully orchestrated “body-of-AI” infrastructure across vendors.
Quantitative forecasts for required power and cooling capacity as AI workloads continue to grow beyond current projections.
Suggested compromises
None identified
Thought Provoking Comments
There is an important, very important physical part of AI that sometimes is overlooked. It’s the power, cooling, and data‑center infrastructure that actually makes AI possible.
Shifts the conversation from AI as software to the often‑ignored hardware foundation, reminding the audience that AI’s capabilities are constrained by physical resources.
Redirects the focus of the session to infrastructure challenges, setting up subsequent discussion of power density, cooling, and modular solutions. It primes listeners to consider the broader ecosystem rather than just AI algorithms.
Speaker: Giordano Albertazzi
The DNA of a data center is changing: what used to be a rack with 10‑20 kW is rapidly becoming 30, 50, 150 kW per rack, and potentially up to a megawatt per rack.
Quantifies the exponential growth in power density, highlighting a concrete engineering problem that underpins AI scaling.
Leads to a deeper dive into power‑train evolution and the need for new architectures (e.g., 800‑V DC), influencing later remarks about future‑resilient infrastructure and prefabricated deployment.
Speaker: Giordano Albertazzi
Human intelligence happens in the brain, but the brain doesn’t survive without a body. Vertiv provides the ‘body’—the power and thermal infrastructure—that lets the AI ‘brain’ (the IT stack) function and produce intelligence.
Uses a vivid biological analogy to make the abstract relationship between compute and infrastructure intuitive, reinforcing the interdependence of hardware and AI.
Strengthens the narrative that infrastructure must be orchestrated as a single system, paving the way for the later emphasis on integrated, interoperable solutions.
Speaker: Giordano Albertazzi
The unit of compute is no longer the server; it’s the AI pod, and ultimately the entire data center operating as one single computer, potentially reaching gigawatt scales.
Reframes the scale at which compute is thought about, moving from individual servers to massive, unified facilities, which challenges traditional data‑center design paradigms.
Encourages the audience to think about modular, scalable designs and justifies the push for prefabricated, rapid‑deployment solutions discussed later.
Speaker: Giordano Albertazzi
Prefabricated solutions like Vertiv Smart Run can reduce time‑to‑deploy by almost 85%, an order of magnitude faster than traditional builds.
Provides a concrete metric that illustrates how new construction approaches can meet the urgent demand for AI‑ready infrastructure.
Introduces a tangible benefit that supports the argument for modular construction, influencing the subsequent speaker to elaborate on Vertiv OneCore as a hybrid solution.
Speaker: Giordano Albertazzi
India is an extremely promising market and hub for AI, not only because of power availability but also due to its demographics; Vertiv is committed to expanding capacity there.
Highlights geographic and strategic considerations, linking infrastructure capability to regional economic factors and positioning India as a focal point for future growth.
Broadens the discussion from technical challenges to market strategy, setting the stage for audience interest in regional deployment models and partnerships.
Speaker: Giordano Albertazzi
Vertiv OneCore combines the advantages of sequential traditional builds and prefabricated modular construction by inserting power and thermal building blocks into a steel shell, tested in a factory before on‑site assembly.
Introduces a hybrid construction model that directly addresses the earlier pain points of speed, quality, and scalability, offering a concrete solution to the problems raised.
Acts as a turning point that moves the conversation from problem description to a specific product offering, prompting listeners to consider implementation pathways and potentially shifting the tone toward actionable outcomes.
Speaker: Speaker 3
Overall Assessment

The discussion was driven forward by a series of insightful remarks that reframed AI from a purely software narrative to a hardware‑centric challenge. Giordano Albertazzi’s analogies, quantitative density figures, and emphasis on modular, rapid‑deployment infrastructure highlighted the urgency and complexity of scaling AI workloads. These points set the stage for Speaker 3’s introduction of Vertiv OneCore, which served as a pivotal moment by presenting a concrete, hybrid solution that directly addressed the earlier identified challenges. Collectively, the comments shifted the dialogue from abstract AI potential to tangible infrastructure strategies, deepening the technical depth and aligning the audience around actionable pathways for building the next generation of AI‑ready data centers, especially in high‑growth regions like India.

Follow-up Questions
What are the technical challenges and solutions for transitioning to 800‑volt DC power infrastructure in high‑density AI data centers?
Understanding DC conversion is critical for supporting future power densities and improving efficiency.
Speaker: Giordano Albertazzi
How can the heat generated by extremely dense AI workloads be effectively captured and reused?
Heat reuse can improve overall energy efficiency and sustainability of AI data centers.
Speaker: Giordano Albertazzi
What best practices enable prefabricated, modular data center construction that reduces deployment time by up to 85%?
Rapid deployment is essential to meet the fast‑growing demand for AI infrastructure.
Speaker: Giordano Albertazzi
How does the shift from server‑based compute to pod‑based and whole‑data‑center compute affect infrastructure design and management?
Redefining the unit of compute changes power, cooling, and networking requirements.
Speaker: Giordano Albertazzi
What specific reference designs are being co‑developed with NVIDIA for AI‑optimized data center deployments?
Reference designs can accelerate adoption and ensure optimal performance for AI workloads.
Speaker: Giordano Albertazzi
What is the projected demand for AI data‑center capacity in India, and how can Vertiv scale its offerings to meet that demand?
Accurate demand forecasting is needed to plan investments and capacity expansion in a key market.
Speaker: Giordano Albertazzi
What regulatory, grid‑capacity, and reliability challenges exist in India for supporting megawatt‑per‑rack power densities?
Addressing grid constraints is essential for deploying ultra‑dense AI racks safely.
Speaker: Giordano Albertazzi
How does Vertiv OneCore integrate with existing building shells versus new steel‑building shells, and what are the trade‑offs?
Clarifying integration options helps customers choose the most suitable deployment path.
Speaker: Speaker 3
What metrics should be used to assess the resilience and future‑proofing of AI‑focused data‑center infrastructure?
Standardized metrics enable objective evaluation of long‑term reliability.
Speaker: Giordano Albertazzi
What are the environmental and sustainability impacts of high‑density AI data centers, particularly regarding cooling and power consumption?
Sustainability considerations are increasingly important for large‑scale deployments.
Speaker: Giordano Albertazzi
How can industry standards ensure interoperability across power, cooling, UPS, and thermal‑chain components in AI data centers?
Interoperability reduces integration risk and simplifies lifecycle management.
Speaker: Giordano Albertazzi
What are the cost implications of adopting 800‑volt DC distribution compared with traditional AC systems in AI data centers?
Cost analysis is needed to justify capital expenditures for new power architectures.
Speaker: Giordano Albertazzi
How can Vertiv’s prefabricated solutions be customized for diverse geographic markets such as India?
Customization ensures solutions meet local regulatory, climate, and operational requirements.
Speaker: Giordano Albertazzi
What are Vertiv’s timelines and investment plans for expanding capacity and presence in the Indian market?
Clear timelines help partners and customers align their own rollout strategies.
Speaker: Giordano Albertazzi
What role will AI itself play in optimizing data‑center infrastructure management and operations?
AI‑driven management could improve efficiency, predictive maintenance, and resource allocation.
Speaker: Giordano Albertazzi
What are the primary failure modes and reliability concerns for ultra‑dense AI pods, and how can they be mitigated?
Identifying failure modes is essential for designing robust, high‑availability systems.
Speaker: Giordano Albertazzi
How does the Vertiv Smart Run solution achieve an 85% reduction in deployment time, and what are its key components?
Understanding the solution’s mechanisms can guide replication in other projects.
Speaker: Giordano Albertazzi
What challenges arise when integrating power and thermal infrastructure into a single modular block, and how are they addressed?
Integrated blocks promise speed but may introduce design and maintenance complexities.
Speaker: Speaker 3
What are the comparative advantages of liquid cooling versus traditional air cooling for AI workloads at extreme densities?
Cooling choice directly impacts performance, energy use, and rack density limits.
Speaker: Giordano Albertazzi
How can data‑center operators measure and improve the efficiency of heat extraction, rejection, and reuse processes?
Efficient thermal management reduces operational costs and environmental impact.
Speaker: Giordano Albertazzi
What future trends in AI workload density are expected, and how will they influence next‑generation data‑center design?
Anticipating workload growth guides long‑term infrastructure planning.
Speaker: Giordano Albertazzi

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

ElevenLabs Voice AI Session & NCRB/NPM Fireside Chat


Session at a glance: summary, keypoints, and speakers overview

Summary

The session focused on overcoming India’s language accessibility challenge by creating a unified multilingual layer for the nation’s digital ecosystem [2][5-6][27-28]. Swati illustrated the problem with a farmer who had to travel 40 km to find help filling an English-only form, underscoring that 95% of digital content is in English while 800 million users lack fluency in it [21-24][27-28]. Shailendra introduced the Bhashini translation plugin (rendered “Bhashni” and “Pashni” in parts of the transcript), which is already deployed on more than 500 websites and leverages over 350 language models [7-9][19]. The plugin is presented as a lightweight, one-line JavaScript snippet that can be copied and pasted into any site, instantly rendering the entire site in all 22 scheduled Indian languages without backend redesign [37-45][46-49][67-71]. It automatically applies the multilingual feature across all pages, is DBM-compliant and framework-agnostic, and therefore requires no per-page integration [88-89][79-81][90-94].
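The “one-line” integration described above would look roughly like the following; the script URL and attribute name are hypothetical placeholders, not the plugin’s published embed tag.

```html
<!-- Hypothetical embed: paste once before </body>; the plugin then
     renders a language selector on every page of the site. -->
<script src="https://example.invalid/bhashini-plugin.js"
        data-default-lang="en"></script>
```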


Key technical capabilities include support for source languages other than English, a “skip-translation” class for elements like calendars or email IDs, and the ability to reorder language lists to prioritize regional languages [104-109][111-117][120-124]. Additional features allow URL redirection to language-specific domains, limiting the language dropdown to a subset, and preventing page reloads in portal forms, while dynamic content is batched to reduce API calls and latency [135-140][141-145][146-151][162-168]. The team emphasized the importance of custom glossaries to preserve domain-specific terminology, handle transliteration, and correct model-generated errors, noting that over 1.5 million glossary entries have been created for clients such as the Ministry of Home Affairs and BSF [190-197][215-226][236-242][246-250][276-283].
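The batching of dynamic content described above can be sketched as follows; the function names, the placeholder translation, and the 50-item batch size are assumptions, not the plugin’s actual API.

```javascript
// Stand-in for a real bulk-translation endpoint: translate many
// fragments in a single request rather than one request per fragment.
function translateBatch(texts, targetLang) {
  // Placeholder "translation": tag each string with the target language.
  return texts.map((t) => `[${targetLang}] ${t}`);
}

// Collect dynamic page fragments, then translate them in chunks to
// reduce API calls and latency (assumed batch size: 50).
function translateAll(fragments, targetLang, batchSize = 50) {
  const out = [];
  for (let i = 0; i < fragments.length; i += batchSize) {
    const batch = fragments.slice(i, i + batchSize);
    out.push(...translateBatch(batch, targetLang));
  }
  return out;
}
```

With 200 dynamic fragments, this issues 4 bulk calls instead of 200 single-string calls, which is the latency saving the presenters describe.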


To date, more than 400 websites have integrated the plugin, generating over 24 million translation inferences and demonstrating the scalability of the solution [96-98]. The roadmap includes expanding support to 36 Indian languages and 35 international languages, automating glossary uploads, and adding a text-to-speech accessibility bar [190-194][195-197]. In the Q&A, an audience member asked whether the plugin could be used by private or commercial entities, to which Swati replied that separate collaboration agreements exist and private stakeholders can engage via the Bhashini Pavilion [304-306][307-310]. Another question about region-based default languages was met with acknowledgement that the use case is feasible and will be evaluated further [311-317][318-322]. Finally, the presenters affirmed that glossaries are customized per client, ingested into individual solutions, and that model fine-tuning is pursued for domain-specific accuracy, underscoring the ongoing commitment to digital inclusion [330-332][334-336][333-336].


Keypoints


Major discussion points


The pervasive language barrier in India and the need for a multilingual digital infrastructure – Swati opens by highlighting that “everything is available only in one language…Majorly English” despite a nation of “1.4 billion voices” [1-6]. Shailendra reinforces this by describing citizens who “are not being able to understand in English and Hindi” and the difficulty of accessing state-level policies in one’s own language [7-18]. A concrete example is given of a farmer forced to travel 40 km to fill an English form, illustrating the real-world impact of the divide [21-28].


Introduction and demonstration of the Bhashini Translation Plugin as a lightweight, plug-and-play solution – Swati explains that the plugin “allows any website to be translated into multiple languages…with a one-liner, very lightweight, simple code” that requires only copy-and-paste and no backend overhaul [36-45][67-71]. She shows the integration on a demo site, noting that the same code works across all pages and is “DBM compliant and framework agnostic” [52-58][79-89][92-99].


Key technical features and challenges addressed by the plugin – The team discusses handling source languages other than English, skipping translation for specific elements (e.g., calendars, email IDs) via a “skip translation class” [104-110][111-117]; customizing language order and default language parameters for regional preferences [120-126][128-131]; managing portals without page reloads, dynamic content batching to reduce API calls, and URL redirection for language-specific domains [141-148][152-160][162-168][176-181].
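The skip-translation behaviour described above can be sketched in plain JavaScript. The class name "skip-translation" and the node shape used here are assumptions for illustration; the session does not name the exact class or data structure the plugin uses:

```javascript
// Sketch: elements carrying a marker class are excluded from the set of
// text fragments collected for translation (e.g., calendars, email IDs).
// The class name "skip-translation" is an illustrative assumption.
function collectTranslatable(nodes) {
  return nodes
    .filter((node) => !(node.classes || []).includes("skip-translation"))
    .map((node) => node.text);
}
```

A page scanner would feed real DOM nodes into a filter like this; fragments that survive the filter are the only ones sent to the translation API.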


The role of glossaries in improving translation quality and contextual relevance – Swati describes glossaries as essential for “post-translation” adjustments, handling domain-specific terminology, transliteration, and avoiding incorrect literal translations (e.g., “home” → “Ghar” vs. “Mukhya Prishth”) [190-214][215-226][241-250][254-262][267-274]. She notes the creation of millions of glossary entries, the need for client-specific customization, and ongoing work to automate glossary ingestion and integrate accessibility features [196-199][330-336].


Audience questions on commercial use, regional default languages, and glossary maintenance – Participants ask whether the solution can be used by private entities, how default languages might be set per region, and how glossaries are maintained or used for model fine-tuning. Swati responds that commercial collaborations are possible via separate agreements, that regional default language changes are technically feasible, and that glossaries are customized per client and can inform fine-tuning pipelines [304-311][312-317][326-333][333-336].


Overall purpose / goal of the discussion


The session aims to present the Bhashini Translation Plugin as a scalable, low-effort infrastructure for multilingual digital inclusion in India. It demonstrates how the tool can instantly translate existing websites into all 22 scheduled Indian languages, outlines its technical capabilities and real-world deployment experience, and engages stakeholders on practical concerns such as commercial adoption, regional customization, and the management of domain-specific glossaries.


Overall tone and its evolution


– The conversation begins with a problem-oriented, urgent tone, emphasizing the exclusion caused by language barriers.


– It shifts to an enthusiastic, solution-focused tone during the product overview and live demo, highlighting ease of integration and impact.


– The tone becomes technical and explanatory as the speakers delve into specific features, challenges, and implementation details.


– In the Q&A segment, the tone turns collaborative and supportive, addressing audience concerns, clarifying possibilities for private use, and inviting further engagement. Throughout, the tone remains constructive and optimistic about achieving digital inclusivity.


Speakers

Shailendra Pal Singh – Senior General Manager, Bhashini; co-presenter and technical expert on the Bhashini translation plugin and multilingual integration solutions [S2][S1].


Swati Sharma – Presenter and subject-matter expert on language accessibility, multilingual AI solutions and the Bhashini translation ecosystem [S4].


Audience – General audience participants; includes individuals such as Yuv (from Senegal) [S5], Professor Charu (Indian Institute of Public Administration) [S6], and Dr. Nazar (role not specified) [S7].


Additional speakers:


– None.


Full session report

The session opened with Swati Sharma describing India’s stark language divide: a nation of 1.4 billion people and “1.4 billion voices” [1-4] yet the overwhelming majority of online content is offered only in English [5-6]. She quantified the gap, noting that more than 800 million Indians are not fluent in English and that roughly 95% of digital material is English-only [5-6]. To illustrate the human impact, Swati recounted a farmer who travelled 40 km simply to find someone able to complete an English-language PM Kisan Samman Nidhi form [21-25][S1].


Shailendra Pal Singh then introduced the Bhashini Translation Plugin, the product that underpins the solution. He highlighted that the plugin is already deployed on more than 500 websites and is powered by over 350 language models [11-13], and positioned it within the National Language Translation Mission as a unified multilingual layer for India’s digital ecosystem [30-33].


A live demonstration followed. Swati showed that the plugin can be integrated into any site with a single, lightweight JavaScript one-liner that requires only copy-and-paste and no backend redesign [70-78]. Once the snippet is inserted, the code automatically enables translation of the entire site into all 22 Indian scheduled languages and persists across every page without the need for per-page integration [79-81].
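The exact snippet is not reproduced in the transcript; a typical one-liner embed of this kind might look like the following, where the script URL and data attributes are illustrative assumptions rather than Bhashini’s actual code:

```html
<!-- Hypothetical embed: the URL and data attributes are illustrative only -->
<script
  src="https://example.invalid/bhashini-translation-plugin.js"
  data-source-language="en"
  data-default-language="hi"
  defer></script>
```

In this pattern the script bootstraps itself on load, which is why a single copy-pasted line can add the language selector on every page without any backend change.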


Technical attributes emphasized during the demo included the plugin’s framework-agnostic design, its compliance with the Digital Brand Identity Management (DBM) guidelines for accessibility, and the fact that it operates without any backend overhaul [88-95][92-99].


The presenters then walked through a detailed feature set:


* Direct source-language translation – the plugin can translate from any source language without using English as an intermediary [104-110].


* Skip-translation class – developers can exclude specific elements (e.g., calendars, email addresses) from translation [111-117].


* Custom language ordering – regional languages can be placed first in the selector [120-126].


* Default-language parameter – a preferred default language (e.g., Hindi) can be forced regardless of the site’s original language [128-131].


* Bilingual-site handling – the plugin can detect and skip translation for pages already in a supported language [133-140].


* Limited dropdown – the language selector can be restricted to a subset of languages [141-145].


* Portal no-reload mode – on portal-style sites the plugin works without reloading the page, preserving user-entered data in forms [141-151].


* Mixed-language detection – the system automatically skips segments that are already in the target language [152-160].


* Dynamic-content batching – for high-frequency sites such as the State Bank of India and the MyBharat portal, content is processed in batches to reduce API calls and stabilise response times [162-168].


* Voice-activated language selection – demonstrated on the Rail Madad site, users can switch languages via voice commands [170-174].


* URL redirection – the plugin can redirect users to language-specific domains [176-181].
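The dynamic-content batching described in the feature list can be sketched as follows. The function names and the 50 ms flush window are illustrative assumptions, not Bhashini’s actual implementation:

```javascript
// Sketch of dynamic-content batching: instead of issuing one API call per
// text fragment, fragments are queued and flushed as a single batched
// request, reducing call volume and stabilising response times.
// translateBatch and the 50 ms window are illustrative assumptions.
function createBatcher(translateBatch, windowMs = 50) {
  let queue = [];
  let timer = null;
  return function enqueue(text) {
    return new Promise((resolve) => {
      queue.push({ text, resolve });
      if (!timer) {
        timer = setTimeout(() => {
          const batch = queue;
          queue = [];
          timer = null;
          // One API call for the whole batch instead of batch.length calls.
          translateBatch(batch.map((item) => item.text)).then((results) => {
            batch.forEach((item, i) => item.resolve(results[i]));
          });
        }, windowMs);
      }
    });
  };
}
```

Each fragment still receives its own promise, so the calling code is unaffected; only the network pattern changes, which matches the reported drop in API calls for rapidly changing content.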


A central component of the architecture is the glossary framework, which refines translation quality by incorporating domain-specific terminology, transliterations, and post-translation adjustments. Over 1.5 million glossary entries have been created for clients such as the Ministry of Home Affairs and the Border Security Force [190-226][236-242]. Specific examples included correcting model-generated punctuation errors (“SMT.”) [241-250], resolving hyphenation mismatches [254-262], fixing singular-plural inconsistencies [267-274], and disambiguating abbreviations such as “BN” for “battalion” in BSF documents [276-283]. Glossaries also enable custom translations for proper nouns, ensuring names like “Vakil Saab Bridge” retain their identity across languages [225-226]. The presenters warned that redundant or mismatched entries can degrade output and therefore must be curated carefully [295-299].
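The post-translation role of a glossary can be sketched as an override pass applied after the model’s output. The function name, entry shape, and example data below are illustrative assumptions, not Bhashini’s actual glossary pipeline:

```javascript
// Sketch: apply client-specific glossary overrides after machine translation.
// Each entry maps an exact source phrase to the client's preferred target
// phrase, replacing the model's literal rendering. Example data is
// illustrative, not Bhashini's actual glossary content.
function applyGlossary(sourceText, modelOutput, glossary) {
  for (const { source, modelForm, preferred } of glossary) {
    // Override only on an exact source match, mirroring the sensitivity
    // described in the session (hyphenation and singular/plural must match).
    if (sourceText.includes(source) && modelOutput.includes(modelForm)) {
      modelOutput = modelOutput.split(modelForm).join(preferred);
    }
  }
  return modelOutput;
}

const homeGlossary = [
  // A "Home" navigation tab should not become the literal word for "house".
  { source: "Home", modelForm: "घर", preferred: "मुख्य पृष्ठ" },
];
```

Because matching is exact, an entry written as "street vendors" will not fire on "street vendor", which is precisely the hyphenation and singular/plural pitfall the presenters warned about.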


Impact metrics were presented: more than 400 websites have integrated the plugin, generating over 24 million translation inferences and creating 1.5 million glossary entries [96-98]. Real-world use cases highlighted include simplifying farmer access to government schemes [21-25], supporting bilingual portals for the Maharashtra Finance Department [133-140], handling high-frequency dynamic content for the State Bank of India and the MyBharat portal [162-168], and enabling mixed-language detection that automatically skips already-translated segments [152-160].


The roadmap envisions expanding language support to 36 Indian languages and adding 35 international languages [190-194], launching an automated glossary-upload portal to streamline client onboarding, and introducing an accessibility bar with text-to-speech and screen-reader functionality [190-197]. These enhancements are framed as a “technology for dignity” that can reach the last mile of India’s digital population [198-200].


During the Q&A, audience members asked whether the solution could be used by private or commercial entities. Swati confirmed that separate collaboration agreements exist for startups and private organisations, with a dedicated stakeholder team available at the Bhashini Pavilion [??]. A query about region-based default language selection was met with an acknowledgement that there is no technical barrier and the use case will be examined further [??]. Questions on glossary maintenance and model fine-tuning were answered by explaining that glossaries are customised per client, ingested into individual solutions, and can inform fine-tuning pipelines after domain classification [??].


In conclusion, the presenters and audience agreed that India’s digital exclusion is fundamentally a language issue and that the Bhashini Translation Plugin offers a scalable, low-effort infrastructure to overcome it. The discussion progressed from an urgent problem statement to a live plug-and-play demonstration, then to a deep technical exposition, and finally to a collaborative dialogue on broader adoption and future enhancements, signalling strong alignment for continued development and deployment. [5-6][7-10][19][36-45][88-95][96-98][190-197]


Session transcript
Swati Sharma

accessibility, language accessibility and language inclusivity. We are a country of 1.4 billion people. More importantly, a country of 1.4 billion voices. We all think differently, we all speak differently, and we all dream differently. But whenever we go online, everything is available only in one language. Majorly English.

Shailendra Pal Singh

To break the language barrier that exists in our country. And we have different solutions and different integrations that we have. One of them is Bhashini translation plugin, which is already sitting on top of more than 500 websites, if I’m not wrong, the exact number. And we are enabling people, we are enabling citizens of India who are essentially not being able to understand in English and Hindi because most of the digital content that you see, primarily the website, maximum you’ll see is a website which is sitting in a state. The default language would be there or English primarily. But then what about rest of the languages? Imagine a scenario that I’m someone from north and I’m living there in Maharashtra.

Mostly you will see the content in Marathi or English. But then what about having the same content? I don’t know English. But I really want to understand what is there. And I want to convert it, the different policies at the state level, different guidelines, different content, maybe creative content, etc. You need to know in my language. So, Bhashini Translation Plugin is one of the engineered solution using all the models that you might already be aware of. 350 plus models from our platform. We have this solution as Peksa Swati.

Swati Sharma

So, as Shailendra Pal mentioned, last year a farmer wanted to apply for the PM Kisan Samman Nidhi. It’s basically a very simple form that the farmer has to fill. But the form was in English. The farmer literally had to travel 40 kilometers only to find somebody who can actually help him out filling the form. This is the language divide. This is the barrier that we are trying to avoid. Eliminate. 800 plus million people are not fluent in English. And 95% of the content which is available, it is in English. This is where Bhashini comes into picture. The National Language Translation Mission of India. We are trying to transcend the language barrier. We are creating a unified multilingual layer for India’s digital ecosystem.

We are not just providing language as a feature. We are providing language as an infrastructure. We are encouraging language as the foundation for digital inclusion. Next slide, please. So, like sir introduced, the Bhashini Translation Plugin. It’s a powerful product through which you can have any website being translated into multiple languages, being accessible to all the people in the last mile. And this happens in matter of minutes. Not days. Or months. Just minutes. This is the power of the product that we are talking about. And you don’t have to rebuild the entire website. You don’t have to redesign it. There is no back-end overhaul. Just one liner, very lightweight, simple code that you can copy and paste onto the website and you will have your website speaking multiple Indian languages.

This is how accessibility is made effortless, inclusion is made scalable, and the last mile reach is made real. So I just want anybody to see. Anybody who can copy and paste. Like we don’t need a developer or a person who knows JavaScript or the entire back-end, just somebody knows copy and paste and we’ll see how with the help of that you can have the entire website multilingual. So anybody who would like to do that? Yes, sir, please.

Shailendra Pal Singh

Maybe, you want to open a website first and show what exactly VashuCast is.

Swati Sharma

So this is the Bhashini’s website and here is the plugin that has been integrated on the website. This plugin will help us have the entire website available in all 22 Indian languages. All right, so while we just give a quick glimpse of what Bhashini translation plugin is, it is basically a very lightweight utility, though. you find it very simple but the content that we have on this website primarily is in English and there are other challenges that you that we would like to discuss later on as how this translation plugin brings in though it looks very easy just you clicked on a button and then you do a translation all together but then we’ll discuss more about what are the different challenges we come across not from the fact the engineering side of it but on the language side of it how we cater and have this challenge taken care so this is just a plugin we just wanted to tell you this is how it works but you know if you go back to English then and then you know we will just talk about what you wanted to we’ll continue with that so I just wanted to have a quick demo of how you can integrate this plug-in onto the website so I think some if yes you can come we’ll just see how with the help of just the knowledge of copy and paste we can have the entire code implemented and you’ll have the entire website translated into multiple different languages.

For the purpose of this demo we had created this dummy website and the code for this website is here. So this is the code that none of us would most of us would not understand. And I would like to request sir to just copy and paste the plug-in code that we have. So we want to tell that this website content is only in English and you want to add multi-lingual flavor to it using Bhashini. You can integrate the solution that we have on the top of

Shailendra Pal Singh

That’s what you’ve meant.

Swati Sharma

Yes. So if you can sir just copy and paste this code. The code which is written here. Yes.

Shailendra Pal Singh

Anywhere here.

Swati Sharma

If you can just add a hyphen between translation and plug-in.

Shailendra Pal Singh

Yes.

Swati Sharma

Can you go back to the website? Refresh it. So you can see that the plug-in is added. And we can now have this website available in all 22 Indian Schedule languages. So that’s the power of this code. We’ve taken care of everything that is happening at the back-end, and you just have to copy and paste the code that we’ve created for you. It’s as simple as that.

Shailendra Pal Singh

So Swati, so let’s say I’ve embedded this particular thing on this particular website. Now it is available. There is the icon. What about if I go to next pages, right? Will the system understand that there’s a link in the I chose and I go to any page? It will reflect Hindi or I have to select every time I go to any page as my language, which I chosen as Hindi.

Swati Sharma

So you don’t have to apply this code on every page. The pages of the website will automatically understand that the multilingual feature has to be embedded on all the pages. So if you move on to any other page of the website. So this was just a dummy website that we had created. Let me go to the Bhashini translation plugin in Bhashini’s website. So if you go to any of the pages, the plugin will remain there. And you will have the multilingual feature added on all the pages of the website, not just the home page. So let’s go back to the slides now. So like we just demonstrated, the code that we have for the plugin that we are talking about is a one-liner, very lightweight, simple integrated, simply integrated code, which you can use to have your website available in all 22 Indian Schedule languages.

It is DBM compliant and framework agnostic. So if you have your website, in different, made in different languages, it’s irrespective of that, the code will be applied to your website and you can use the same code.

Shailendra Pal Singh

So Swati, can you just give some light on what is DBM compliant as how the website is DBM compliant? If I, let’s say I have a government website and I want to include the Bhashini translation plugin onto it, what is this DBM compliant that you talked about?

Swati Sharma

So these are the compliances mentioned in the digital brand identity management compliance book that is available. So for everybody to have an accessible website, the DBM compliance have to be followed. And we have the DBM compliant code with us wherein all the accessibility features like, you know, that happens in the backend, you know, for, any person who is a visually special person who wants to access the website is able to do that with the help of the technical integrities that we’ve incorporated into the plug-in code that we have. So this is a glimpse of the impact that we’ve already created. We have approximately more than 400 plus websites that are already integrated with Bhashini translation plug-in.

From those websites, we get approximately 24 million plus inferences. And we’ve created 1.5 million plus glossaries. So glossary is something that I will take at a later section during the session only. But just for a short description, glossary enhances the translation in such a manner that the end citizen who is actually consuming the content from the website is able to understand the content. And also, these are the 22 Indian scheduled languages in which the plug-in is available. Next slide, please. So while we were creating the plugin, we had to create something that, you know, one size fits all product and which is something very difficult to create because everybody has different requirements and to cater to all those requirements, we had to make one product that can simply be accessed by everybody.

So these are some of the use cases that I will be discussing that our plugin has the capability to resolve to. The first one is that generally what happens in, you know, a product like this, you translate, you know, from English to the target language. But here in our plugin, what we’ve done is that even if your website is, let’s say, created in a language other than English, let’s say Marathi. That can also be translated to the targeted language directly. So you don’t have to first translate the website to English and then move on to the targeted language. You can have the source and target language as per your requirement. So that’s how we’ve not, you know, you don’t have to get into the bridge of creating English as an intermediary to move from one language to another.

Next slide, please. Okay, so when I talk about a website, there are different sections of the website. And not all these sections would you want to translate. For example, the calendar, if there’s a calendar, you would not want it to be translated into, you know, the target language. Including email IDs and, you know, there are certain sections that a lot of people didn’t want to be translated. So there is one class that you can embed that is the skip translation class. Embedding that will help you. Navigate to the, navigate the sections that you don’t want to be translated. So, that’s also one feature that we have with our plugin. Next slide, please. Okay, so, you know, you saw the plugin, right?

There were languages listed in a certain manner in the plugin. So, what happens is at, you know, many regional places, we want the plugin to have the regional languages on top. So, for example, after English, people don’t want to go alphabetically like Assamese, Bengali. They would want their regional language. In this case, they wanted Hindi to come in the, you know, to change the order of the languages that are appearing. And that is also possible. So, if you want your regional language to come on top, you can have that with our plugin. So, you know, majorly what we say is that we would want to… We want to display our website in a certain language.

So for example, if you created the website in let’s say English, but you would want all the users to have the language to be displayed as Hindi first. And probably then they can navigate to their own targeted language. So even if your website, the source language of your website is English, you can, there is a possibility of adding the parameter which can have the source language as Hindi or Marathi or Punjabi as the user requires for all your websites. Next slide please. Okay, so what if your… Your website has the, you know, has been created in two languages. So for example, you’ve created your website in English and Marathi also. So that was the use case that we had with finance department Maharashtra.

So they did not want translation to happen in the Marathi language and the English language, though their source language of, you know, so basically the source language of the website was English and Marathi. So if you want to skip translation for different languages also, you can do that. So in this case, what happens is that the user selects a language. If the language of the source is selected, let’s say, you know, English or Marathi, it will go redirected to the English or the Marathi page of the website. And if the user has selected any other language, it will move on to the normal process of translating it into the target language. Next slide, please. So, you know, sometimes we have portals also.

Yes. So, you know, because we would want to have websites available in all 22 Indian scheduled languages so that we try and reach out to the maximum people. But if that is your use case wherein you would want just three or four languages to be displayed for every user to be seen, you can have that also. So the drop -down will only display four languages in that case? Yes. But it’s always encouraged to have all the languages so that everybody, you know, who’s accessing your website can have the website available. Thank you, then. So, talking about this use case, what happens is that in most of the cases, we also have portals. And in portals, we have forms or, you know, we basically ask input from the user who is using the portal.

So if they apply Bhashini translation plugin and they, you know, move on from one language to another, it will reload the entire page. If it reloads the entire page, whatever the user has filled in, like their details, their name, their email IDs, all that information was lost. So what we did to capture this was that now plugin can also have the portals without the reload feature. So if you don’t want the plugin to reload every time a user selects a language from the drop-down, you can have that. Next slide, please. So this was a very interesting use case. You know, you can see this is how the website was displayed. So the source language of the website is English.

But like we can see, after every English, below every English word, there is a different language. So Haryana written in English, then Haryana written in Hindi. Puducherry written in English and some other language. So here this was use case of handling mixed languages. So what we did here was that whenever the plugin sees that the source language of the plugin is different from what characters it is getting, like here in Haryana, it is getting Hindi characters also, it will skip this translation automatically. So you would not have to skip it at your end. We’ve done it and we’ve created it, we’ve designed the plugin in such a manner that if the source language of the website is, you know, if the contents going for the translation are different from the source language of the website, it will automatically skip the translation.

Next slide, please. So… With certain use cases, what happened was… that there was a lot of dynamic content on the website. So, static content can easily be translated. Like, it is also difficult, but it’s not as difficult as handling the dynamic content. But for certain, like for State Bank of India and for the MyBharat portal, the dynamic content was changing so rapidly that it was making too many API calls and the response time was getting delayed. So, what we did there was that we intelligently had the code running in such a manner that the dynamic content was, the translation of dynamic content was handled in batches. And that’s how the, you know, API calls, the increased API calls reduced and the response time was stabilized.

Next slide, please. Okay, so now… We all can, you know, navigate to the website, select the target language on the website and have the website available in the target language. But what if somebody cannot navigate, cannot select a language from the drop-down? We also, with Rail Madad, you know, if you go to the Rail Madad’s website, there is a mic button. So you just say out your language. So for example, if you say out Gujarati, the entire website will turn into Gujarati. So that’s the capability of it. Next slide, please. Okay, so this is a very recent use case that we’ve handled. So like you can see here, there is the MSD website. And there is also another domain name, which is Hindi, which is in Hindi.

So what the client wanted was that, you know, once the user selects Hindi as the drop-down, the translation happens, but it also redirects to the… Hindi domain of the website. So that mapping of which language to which domain, that is also something that we have done at our end and you can have URL redirection also. Next slide please. Okay, so what happens, so let me just ask, I hope everybody here understands Hindi, right? What is the translation of home in Hindi? Ghar, Ghray, that’s right, right? But the home tab on the website, if it is getting translated to Ghar, it’s not the correct translation. It should be translated to Mukhya Prishth. So these kind of use cases wherein the translation which is being given by the model is correct but you would want a specific different translation for a specific word or phrases that can also be handled through glossary.

So, the way that we have done this is that we have website. So, we have a lot of information in the information in the Just now, after we complete this, next slide please. So, this is the future roadmap for plugin that we have. We have expanded it to 36 languages, 36 more Indian languages. So, you can go to Bhashini Pavilion which is right here in this hall only. We have a demo of the plugin which is available in 36 languages. We are also incorporating the 35 international languages. We have done that for certain use cases which are displayed here today at Bharat Mandapam. Secondly, we will be talking about glossary but the glossary in, you know, traditionally the glossaries were sent to us through emails and there was a process to, you know, process the glossaries and then ingest it.

But now, we are also planning to get it automated wherein, you can just simply upload the glossary from your onboarding portal. And third, we are also adding the accessibility bar to the plugin. So if you want to have text-to-speech also integrated or screen reader also integrated with the plugin that we just showed, that is also something that we are going to do in some time. So technology for dignity, Bhashini Translation plugin would help. It is a powerful tool that will empower you to actually disseminate whatever information you want to, to actually reach the last mile. Moving on to the next segment, which is the glossary. So, you know, we all of us here, we would have some application, some website developed for…

the ease of the user. We would want a person, a student who is registering for a form who can actually do we would want the person to do it in their own preferred language. We would want a farmer to listen to the schemes that are available for him in his preferred language. We would want an Anganwadi worker to have the schemes that are available for her told to her in her own language. So that is all what we are working for. We are working for inclusivity and we are working for accessibility. Next slide please. So while we do that we also add Bhashini’s layer to all our solutions or websites to have the actual information reach the last mile.

But generally what happens is that you know we get a remark that the translation is not correct. It is wrong. And after doing analysis with most of our customers we realize that the audience, that the users who are trying to actually use our product, they are not looking for accurate translations. They are looking for understanding the content, the intent of the content which is there on the solution on the website that they have created. And this is not the result, you know, for this we don’t have to focus on getting the accuracy of the translation. We actually have to focus on the context of the translation, use case of the translation, domain of the translation.

So when we realize that, we understood the concept of glossary and that’s how glossary was formed. Next slide please. Now you all would be, you know, waiting for, to understand what glossary is all about. So glossary saw… It involves two kinds of use cases. One is the post-translation that I just told you before. That, you know, home being translated to ghar in Hindi is absolutely right. But home being translated to home tab being translated to ghar is probably not correct. So post translation wherein you would want home to appear as Mukhya Prishth on the tab, home tab, that is something that we cater through with glossary. The second use case is like in the example, there is a bridge called Vakil Saab Bridge in Gujarat.

So Vakil Saab Bridge if translated to English would become something like Lawyer Bridge or something. We wouldn’t want that. Vakil Saab Bridge is our coined terminology and we would want it to retain its identity. We would want Vakil Saab Bridge to be written as Vakil Saab Bridge only in English. And this is the use case of transliteration. So these two kind of use cases are solved through glossary. What we do is we create. We create these glossaries with our customers and we ingest it to the customer’s specific API. Next slide, please. So, you know, like I told you the meaning of glossary, all of us here have different glossaries. Like, you know, the science domain glossaries are different.

Gen Z has a different glossary altogether. You know, any region would have a specific kind of a glossary. So all of these glossaries have to be created with us. And, you know, the customers who created those glossaries have got the translation, which are accepted by the end user, which are understood by the end user. Like you can see, Ministry of Panchayati Raj gave us 15 lakh words of the Panchayat. Survey of India has given us 16 lakh words. So if we create glossaries together, we can have the translation barrier completely eliminated. So I will now walk you through certain. Use cases wherein we faced problems with our customers, but they were not. translation issues, they were actually issues that could have been easily resolved through glossary.

So if you can read this sentence here. This problem was reported to us by the Ministry of Home Affairs, where the Honourable Home Minister's profile was not reflecting correctly. This was the English sentence, and this was the translation that we were getting. Can anybody tell me what the problem is here? Because of this full stop, Shrimati abbreviated as Smt. with a dot, the model thought that the sentence had ended there, and that is why the formation of the sentence is entirely incorrect. But the solution was very simple: we just had to add Smt., with the dot, to the glossary, or remove the dot from Smt.
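
The failure mode here, a dotted abbreviation being mistaken for a sentence boundary, can be illustrated with a toy splitter. This is a sketch only, assuming a naive split on full stops; the `PROTECTED` set and `split_sentences` function are hypothetical and not how Bhashini actually segments text.

```python
import re

# Illustrative sketch: why "Smt." broke the sentence, and how a list of
# protected abbreviations avoids the spurious split.

PROTECTED = {"Smt.", "Dr.", "Shri."}  # abbreviations whose dot is not a sentence end

def split_sentences(text: str) -> list[str]:
    # Temporarily mask the dot in protected abbreviations so it survives the split.
    masked = text
    for abbr in PROTECTED:
        masked = masked.replace(abbr, abbr.replace(".", "\u0000"))
    parts = [p.replace("\u0000", ".") for p in re.split(r"(?<=\.)\s+", masked)]
    return [p for p in parts if p]

# Without protection, a naive splitter would cut the sentence at "Smt.".
print(split_sentences("Smt. Sharma chairs the committee. It meets weekly."))
# ['Smt. Sharma chairs the committee.', 'It meets weekly.']
```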

And we could have the correct output. It's as simple as that. It's not a translation problem; it's a glossary-understanding problem. Next slide, please. Okay, can anybody tell me the difference between these two puzzle pieces? Yes, one of them has a hyphen and the other one does not. When we received the glossary from MSME, there was a hyphen between PMS and dashboard, but on the website it was actually displayed without the hyphen. Glossary is that sensitive: if you give me PMS-dashboard with a hyphen, it will only recognize and translate that exact form. If there is no hyphen, it will not recognize it, and it will not give you the translated output which you have given us in the glossary.

And again, here there was a singular and plural problem. Street vendors was mentioned in the glossary sheet that we received, but street vendor was what actually appeared on the website. So if there is a singular-plural difference between the glossary sheet you give us and what is actually reflected on your website or solution, it will create a mismatch. There is one more thing you can do through glossary. We received a requirement from the Animal Husbandry Department that certain abbreviations, or even an entire sentence, should be skipped from translation. If you just give us that sentence as a glossary pair, identical in English and Hindi, this can easily be achieved.
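
The exact-match sensitivity described here, where a hyphen or a plural "s" makes an entry miss, and the identity-pair trick for skipping translation, can be sketched as a plain dictionary lookup. This is illustrative only; the `translate` function and the `"XYZ"` abbreviation are hypothetical placeholders, not real Bhashini entries.

```python
# Sketch of the exact-match behaviour: a glossary entry fires only when
# the website text matches it character for character.

glossary = {
    "PMS-Dashboard": "पीएमएस डैशबोर्ड",  # entry supplied with a hyphen
    "street vendors": "स्ट्रीट वेंडर्स",    # entry supplied in the plural
    "XYZ": "XYZ",                         # identity pair: skip translation entirely
}

def translate(term: str, model_fallback: str) -> str:
    """Return the glossary target on an exact match, else the model's output."""
    return glossary.get(term, model_fallback)

print(translate("PMS Dashboard", "<model output>"))  # misses: site has no hyphen
print(translate("street vendor", "<model output>"))  # misses: site uses the singular
print(translate("XYZ", "<model output>"))            # identity pair keeps it as is
```

The first two lookups fall back to the model precisely because the strings differ by one character, which is the sensitivity the speaker is warning about.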

Next slide, please. Okay, in one of the glossaries we received was authorized officer, and they wanted us to write authorized officer as niyukt adhikari. But niyukt adhikari actually means appointed officer. This is also something we have to be careful of, because there are two kinds of users in this case: English users and Hindi users. The English user would read it as authorized officer, but since we have added the glossary and changed it to niyukt adhikari in Hindi, the Hindi user would understand it as appointed officer. So we have to be very careful while drafting glossaries.

Next slide, please. Okay. If I can just ask, what is the full form of BN? Normally, what do we consider the full form of BN? Billion, right? We would not consider BN to be battalion. But in BSF's case, this was a huge problem: BN for BSF means battalion, not billion. The entire context changes. So for BSF, we have created glossaries for all the abbreviations. It is always suggested that whatever abbreviations you display on your website or solution, give them to us as glossary entries so that the correct expansion can be displayed. Now, can you tell me, if PS to Minister is translated as Maananiya Vastra Mantri Ji ke Niji Sachiv, is there a difference, or what would be the problem here?
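
The BSF example amounts to per-client (domain-specific) glossaries: the same abbreviation expands differently depending on whose site is being translated. The sketch below is illustrative only; `domain_glossaries` and `expand` are hypothetical names, not Bhashini's API.

```python
# Sketch of per-client glossaries: "BN" means battalion on a BSF site
# but billion everywhere else.

domain_glossaries = {
    "BSF": {"BN": "battalion"},    # Border Security Force usage
    "default": {"BN": "billion"},  # everyday usage
}

def expand(abbr: str, client: str = "default") -> str:
    """Look up the client's glossary first, then fall back to the default one."""
    per_client = domain_glossaries.get(client, {})
    return per_client.get(abbr, domain_glossaries["default"].get(abbr, abbr))

print(expand("BN"))         # billion
print(expand("BN", "BSF"))  # battalion
```

Keeping the client-specific table separate from the default one mirrors the point made later in the Q&A: one client's glossary is ingested only into that client's solution, never into another's.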

Fine, let me tell you. This is also correct, and this is also correct, but it is not the actual translation of PS to Minister. If we want Maananiya written in the Hindi translation, we should always have its equivalent in the English version as well. Glossary pairs are supposed to be equally weighted; you cannot expect the model to add or delete words on its own. So what we did here was go back to the customer and say: if you want Maananiya to be added at the output, please add respected or honourable in the input. Only then will it be balanced.

Next slide, please. This one is a request from our end. We receive a lot of glossaries that are redundant for us. By that I mean that, for example, we received employment and skill development as the glossary terminology, and the Hindi translation supplied was exactly what the model was already producing. In such a case, if you give us a glossary entry that is actually the output of the model, you are only creating redundancy. So if you can avoid that and give us only the post-translations or transliterations that are not recognized by the model, that would be handy. Next slide, please.
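
The redundancy check requested here can be sketched as filtering out glossary pairs whose target already matches the model's own output. This is a hypothetical illustration: `model_translate` is a stand-in stub, not a real translation call, and the Hindi strings are illustrative.

```python
# Sketch: drop glossary pairs whose target is already what the model produces.

def model_translate(text: str) -> str:
    # Stand-in for the real model; here it already handles this phrase well.
    return {"employment and skill development": "रोजगार और कौशल विकास"}.get(text, "")

def drop_redundant(pairs: dict[str, str]) -> dict[str, str]:
    """Keep only pairs where the glossary target differs from the model output."""
    return {src: tgt for src, tgt in pairs.items() if model_translate(src) != tgt}

submitted = {
    "employment and skill development": "रोजगार और कौशल विकास",  # redundant
    "PS to Minister": "मंत्री जी के निजी सचिव",                    # genuinely needed
}
print(drop_redundant(submitted))  # only the second pair survives
```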

In the end, I would just like to say that language is not just words; it is identity. Let us prepare India's languages for the future of AI. Let us create glossaries, and let us add a multilingual layer to all your solutions, so that the end user actually benefits and we achieve real digital inclusion, accessibility and inclusivity. Thank you. Any questions?

Audience

Can you hear me? The translation solution you were showing, I know it has been sponsored by the government, but can it also be used for commercial purposes? Can private or public entities use it on their websites as well?

Swati Sharma

We have different kinds of collaborations. For that kind of collaboration, a separate agreement is created altogether. If you want to know more about it, just go to the Bhashini Pavilion. We have stakeholders there who also handle startups and private organizations, and they can help you.

Audience

And one more thing I wanted to know. For the websites you were showing, we can choose the default language, right? Can that be extended? Say someone logging in from Delhi would want to see the site in Hindi, while someone coming from Maharashtra would want their own language. Can the default language change based on the user's region?

Swati Sharma

That's an interesting use case. From what I've understood, you want different regions to have the website open in different default languages. As per my knowledge, I don't see a technical challenge in it, but we will have to look at the use case at our end and see whether it can be delivered. It's a very interesting use case. We'll look at it. Thank you.

Audience

Hi. We are all aware that we have multilingual models, and they have been trained on a lot of words according to their domain knowledge. So if we have glossaries, how do we ensure that each glossary is maintained according to its domain and then trained or fine-tuned?

Swati Sharma

Glossaries are customized. For example, somebody from the Ministry of Home Affairs would not want the glossary of, let's say, CSI. The domains are different and the contexts are different, so glossaries are customized and ingested for the client itself. We do have general glossaries that can be applied to all, but since there is no one-glossary-fits-all solution, we customize a glossary for a client and then ingest it into that client's solution only, not into other clients' environments.

Audience

Thanks. The glossaries you have, do you use them to fine-tune your models, or are they just available as documents to refer to at inference time?

Swati Sharma

We do try to fine-tune the models as well, but there are a lot of things we have to look at while doing that, because we have to classify the glossaries into different domains and then fine-tune models for each domain space. It's a long process, but we do it. Okay, thank you. If there are any other questions, I will be available at the Bhashini Pavilion here as well. I would request everybody to please come visit us and explore our solutions and services. Thank you so much for being a lovely audience.

Related Resources: Knowledge base sources related to the discussion topics (25)
Factual Notes: Claims verified against the Diplo knowledge base (3)
Confirmed (high)

“India has 1.4 billion people and the overwhelming majority of online content is offered only in English”

The knowledge base explicitly states that India has 1.4 billion people with diverse languages, but most online content is only available in English, confirming the claim [S1].

Correction (medium)

“Roughly 95 % of digital material is English‑only”

A source reports that 75 % of Internet content lacks language diversity, which differs from the 95 % figure cited in the report, indicating the claim may overstate the proportion [S27].

Additional Context (low)

“The plugin can translate the entire site into all 22 Indian scheduled languages”

India’s constitution recognises 22 scheduled languages, providing the linguistic scope the plugin aims to cover, but the source does not verify the plugin’s capability; it only confirms the number of languages [S15].

External Sources (61)
S1
ElevenLabs Voice AI Session & NCRB/NPMFireside Chat — -Shailendra Pal Singh: Role/title not explicitly mentioned, but appears to be a co-presenter/expert on Bhashini translat…
S2
Digital Democracy Leveraging the Bhashini Stack in the Parliamen — -Shailendra Pal Singh- Senior General Manager, Bhashani
S3
https://dig.watch/event/india-ai-impact-summit-2026/digital-democracy-leveraging-the-bhashini-stack-in-the-parliamen — mostly from my understanding and experience with the English that has happened, in the past. Yeah. interesting points, P…
S4
ElevenLabs Voice AI Session & NCRB/NPMFireside Chat — -Swati Sharma: Role/title not explicitly mentioned, but appears to be a key presenter/expert on Bhashini translation sol…
S5
WS #280 the DNS Trust Horizon Safeguarding Digital Identity — – **Audience** – Individual from Senegal named Yuv (role/title not specified)
S6
Building the Workforce_ AI for Viksit Bharat 2047 — -Audience- Role/Title: Professor Charu from Indian Institute of Public Administration (one identified audience member), …
S7
Nri Collaborative Session Navigating Global Cyber Threats Via Local Practices — – **Audience** – Dr. Nazar (specific role/title not clearly mentioned)
S8
Criss-cross of digital margins for effective inclusion | IGF 2023 Town Hall #150 — Pavel Farhan:goal. Thank you. All right. Hi again, this is Pavel for The Record. I guess the benefit of going last is At…
S9
WS #144 Bridging the Digital Divide Language Inclusion As a Pillar — An audience member (Gabriel) raised practical implementation barriers, noting that font rendering and screen reader acce…
S10
Digital Inclusion Through a Multilingual Internet | IGF 2023 WS #297 — Additionally, the lack of devices or platforms that support specific languages can further hamper internet usage. Furthe…
S11
Digital inclusivity – Connecting the next billion — With over 90% of online content being exclusively in English, non-English speakers who depend on native language resourc…
S12
Main Session 1: Global Access, Global Progress: Managing the Challenges of Global Digital Adoption — Shivnath Thukra: Thanks to you and thanks for inviting me, Meta from India on this panel. I will, in the spirit of bein…
S13
https://dig.watch/event/india-ai-impact-summit-2026/elevenlabs-voice-ai-session-ncrb-npmfireside-chat — But like we can see, after every English, below every English word. there is a different language. So Haryana written in…
S14
Science AI & Innovation_ India–Japan Collaboration Showcase — Yeah, I think I think sort of agree to what everybody has talked about. I think with AI and the smartphone and we are on…
S15
Open Forum #36 Challenges & Opportunities for a Multilingual Internet — Pradeep Kumar Verma: I think I’m audible. So I will be presenting two case studies from India. So one is on the Bhasa…
S16
WSIS Action Line C2 Information and communication infrastructure — Aleksandra Jastrzebska: Thank you so much, Gonzalo. So good morning, everyone. I’m Aleksandra Jastrzemska, a recent grad…
S17
Bridging the Digital Skills Gap: Strategies for Reskilling and Upskilling in a Changing World — Himanshu Rai: Thank you very much. It’s always useful to be the last speaker because I can claim that I had the last wor…
S18
WS #119 AI for Multilingual Inclusion — Developers face technical challenges when accommodating non-Latin scripts in their systems. This includes issues with em…
S19
Safe and Responsible AI at Scale Practical Pathways — He notes that LLMs stumble on domain‑specific terms and suggests combining a glossary (or knowledge graph) with the mode…
S20
ElevenLabs Voice AI Session & NCRB/NPMFireside Chat — And one more thing which I wanted to know. So like you were showing for the websites, it was by default we can choose th…
S21
WS #179 Navigating Online Safety for Children and Youth — 3. Cultural Differences: The need for region-specific policies due to cultural variations was emphasised, complicating e…
S22
https://dig.watch/event/india-ai-impact-summit-2026/elevenlabs-voice-ai-session-ncrb-npmfireside-chat — From those websites, we get approximately 24 million plus inferences. And we’ve created 1 .5 million plus glossaries. So…
S23
OpenAI enhances model performance and customisation options — OpenAIhas unveilednew features to enhance model performance and customizability, catering to developers seeking to optim…
S24
Digital Inclusion Through a Multilingual Internet | IGF 2023 WS #297 — Additionally, community networks are emerging as a technological solution to provide connectivity even in remote areas. …
S25
Criss-cross of digital margins for effective inclusion | IGF 2023 Town Hall #150 — Pavel Farhan:goal. Thank you. All right. Hi again, this is Pavel for The Record. I guess the benefit of going last is At…
S26
Open Forum #29 Advancing Digital Inclusion Through Segmented Monitoring — Pria Chetty: For us, this work is core to our organization, and so we’ve been running for a number of years our after-ac…
S27
Digital divides & Inclusion — Collaboration could involve sharing best practices, providing technical assistance, and advocating for policies that pro…
S28
Science as a Growth Engine: Navigating the Funding and Translation Challenge — And so we actually see this manifest a lot because, you know, there’s been an explosion of drug discovery, and discovere…
S29
ITU’s Call for Input on WSIS+20 — Economic | Development Resource Mobilization and Funding Challenges The private sector derives significant benefits fr…
S30
Promoting policies that make digital trade work for all (OECD) — Lastly, the analysis highlights the importance of involving the private sector in policy decision making. It advocates f…
S31
Public-Private Partnerships in Online Content Moderation | IGF 2023 Open Forum #95 — Another key argument presented is the significance of having a legal framework in place to enable and support these part…
S32
Keynote-Alexandr Wang — “That’s transformative, perhaps most especially in countries like India, where so many languages are spoken.”[11]. “That…
S33
Leaders TalkX: Local to global: preserving culture and language in a digital era — Government-led national strategies are essential for language preservation Goyal presents India’s Bhasani program as a …
S34
ElevenLabs Voice AI Session & NCRB/NPMFireside Chat — The human impact of this divide was illustrated through a compelling anecdote about a farmer who needed to travel 40 kil…
S35
WSIS Action Line C8: Multilingualism in the Digital Age: Inclusive Strategies for a People-Centered Information Society — Tawfik Jelassi: Thank you, Davide. Ladies and gentlemen, colleagues, good morning to all of you and thank you for joinin…
S36
https://dig.watch/event/india-ai-impact-summit-2026/elevenlabs-voice-ai-session-ncrb-npmfireside-chat — But like we can see, after every English, below every English word. there is a different language. So Haryana written in…
S37
Open Forum #13 Bridging the Digital Divide Focus on the Global South — Tripti Sinha identifies language as a significant barrier to Internet participation, noting that millions of users still…
S38
Digital Inclusion Through a Multilingual Internet | IGF 2023 WS #297 — Audience:Thank you. I want to share a view from a regular user’s perspective. One of the main barriers is actually a lac…
S39
Safe and Responsible AI at Scale Practical Pathways — On contextualisation, Srivastava noted that while large language models are improving at general tasks, they consistentl…
S40
How the Global South Is Accelerating AI Adoption_ Finance Sector Insights — Leverage newly announced Indian sovereign language models as interim solutions while waiting for global companies to est…
S41
Lightning Talk #90 Tower of Babel Chaos — This brutally honest reaction captures the real human cost of language barriers – the physical and emotional stress of b…
S42
WS #225 Bridging the Connectivity Gap for Excluded Communities — The discussion maintained a professional but increasingly urgent tone throughout. It began optimistically with solution-…
S43
WS #144 Bridging the Digital Divide Language Inclusion As a Pillar — Christian Daswon: Thanks Ram. I’m really glad that Jen brought up cyber security. I think that’s a very important topic….
S44
Comprehensive Report: Cyber Fraud and Human Trafficking – A Global Crisis Requiring Multilateral Response — The tone began as deeply concerning and urgent, with speakers emphasizing the gravity and scale of the problem. However,…
S45
WS #53 Promoting Children’s Rights and Inclusion in the Digital Age — This comment sets an urgent and action-oriented tone for the discussion, emphasizing the critical nature of child online…
S46
AI for Safer Workplaces & Smarter Industries Transforming Risk into Real-Time Intelligence — The discussion maintained an optimistic and collaborative tone throughout, with speakers consistently emphasizing human …
S47
Lightning Talk #34 Digital Cooperation for Sustainable Heritage Preservation — The tone is consistently enthusiastic, informative, and solution-oriented throughout the presentation. The speaker maint…
S48
Fireside Conversation: 01 — The conversation maintained an optimistic and collaborative tone throughout, with both speakers expressing enthusiasm ab…
S49
WS #6 Bridging Digital Gaps in Agriculture & trade Transformation — The tone was largely optimistic and solution-oriented. Speakers were enthusiastic about the potential of the Internet Ba…
S50
Transforming Health Systems with AI From Lab to Last Mile — The discussion maintained a cautiously optimistic and collaborative tone throughout. It began with enthusiasm about AI’s…
S51
AI as critical infrastructure for continuity in public services — The discussion maintained a collaborative and constructive tone throughout, with participants building on each other’s p…
S52
WS #198 Advancing IoT Security, Quantum Encryption & RPKI — The tone was primarily informative and forward-looking, with speakers providing technical explanations as well as policy…
S53
Advancing Scientific AI with Safety Ethics and Responsibility — The discussion maintained a collaborative and constructive tone throughout, characterized by technical expertise and pol…
S54
Strengthen Digital Governance and International Cooperation to Build an Inclusive Digital Future — The discussion maintained a consistently collaborative and optimistic tone throughout, with speakers emphasizing partner…
S55
Discussion Report: AI Implementation and Global Accessibility — The tone was consistently optimistic and collaborative throughout the conversation. Both speakers maintained a construct…
S56
Open Forum #60 Cooperating for Digital Resilience and Prosperity — The discussion maintained a consistently collaborative and constructive tone throughout. It was professional yet engagin…
S57
Leaders TalkX: Building inclusive and knowledge-driven digital societies — The discussion maintained a professional and collaborative tone throughout, with speakers sharing both achievements and …
S58
Business Engagement Session: Sustainable Leadership in the Digital Age – Shaping the Future of Business — The discussion maintained a consistently collaborative and optimistic tone throughout. It began with academic framing bu…
S59
Advocacy to Action: Engaging Policymakers on Digital Rights | IGF 2023 — A large portion of population is digitally illiterate
S60
https://dig.watch/event/india-ai-impact-summit-2026/ai-for-social-good-using-technology-to-create-real-world-impact — But I think open networks allows many actors, many innovators to build applications on the edge using AI. And I think we…
S61
How Multilingual AI Bridges the Gap to Inclusive Access — And I think this metric should be driven by what do we want it to be in the cultures and the regions to empower this. An…
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
Swati Sharma
32 arguments · 135 words per minute · 4990 words · 2203 seconds
Argument 1
Language barrier hampers citizens’ access to digital services
EXPLANATION
Swati points out that most online content in India is only available in English, which creates a barrier for citizens who do not understand that language. This limits their ability to use digital services effectively.
EVIDENCE
She notes that when people go online, everything is available only in one language, primarily English, highlighting the exclusivity of digital content [5-6].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The dominance of English online (over 90% of content) creates a disadvantage for non-English speakers, confirming the language barrier issue [S11]; additionally, lack of devices and digital literacy further hampers language-specific internet use [S10].
MAJOR DISCUSSION POINT
Language barrier hampers citizens’ access to digital services
AGREED WITH
Shailendra Pal Singh
Argument 2
Over 800 million Indians are not fluent in English; 95 % of online content is English
EXPLANATION
Swati emphasizes the scale of the problem by stating that more than 800 million Indians lack English proficiency, while the vast majority of digital content is in English. This underscores the need for multilingual solutions.
EVIDENCE
She states that 800 plus million people are not fluent in English and that 95 % of the available content is in English [27-28].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Studies show that more than 90% of online content is in English, underscoring the scale of the problem highlighted by the speaker [S11].
MAJOR DISCUSSION POINT
Over 800 million Indians are not fluent in English; 95 % of online content is English
Argument 3
Example of farmer unable to fill PM Kisan Samman Nidhi form due to English‑only interface
EXPLANATION
Swati illustrates the language barrier with a concrete case where a farmer had to travel 40 km to find help filling a government form that was only in English. The example shows real‑world impact on citizens.
EVIDENCE
She recounts that a farmer needed to travel 40 kilometers to find someone who could help him fill the PM Kisan Samman Nidhi form because the form was in English [21-25].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
A farmer had to travel 40 km to find help because the PM Kisan Samman Nidhi form was only in English, illustrating the real-world impact of the language barrier [S1].
MAJOR DISCUSSION POINT
Example of farmer unable to fill PM Kisan Samman Nidhi form due to English‑only interface
Argument 4
Plugin enables instant multilingual translation of any website via a single lightweight code snippet
EXPLANATION
Swati describes the Bhashini Translation Plugin as a one‑liner that can be copied and pasted into any website, instantly providing multilingual support without extensive development effort. The solution is positioned as fast and effortless.
EVIDENCE
She explains that a single lightweight code snippet can be added to a website, making it multilingual in minutes without rebuilding or redesigning the site [41-45].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The Bhashini Translation Plugin is described as a lightweight copy-paste code snippet that can make any website multilingual within minutes [S1].
MAJOR DISCUSSION POINT
Plugin enables instant multilingual translation of any website via a single lightweight code snippet
Argument 5
Works across all pages without needing per‑page integration
EXPLANATION
Swati clarifies that once the plugin code is embedded, every page of the website automatically inherits the multilingual capability, eliminating the need to add code to each individual page.
EVIDENCE
She demonstrates that the plugin persists across navigation and does not require per-page integration, as the pages automatically understand the multilingual feature [79-86].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Once embedded, the plugin automatically applies to every page of a site and retains language selection across navigation [S1].
MAJOR DISCUSSION POINT
Works across all pages without needing per‑page integration
Argument 6
Framework‑agnostic, DBM‑compliant, and requires no backend overhaul
EXPLANATION
Swati highlights that the plugin works with any web framework, complies with Digital Brand Management (DBM) standards, and does not require changes to the backend, making it easy to adopt for existing sites.
EVIDENCE
She notes that there is no backend overhaul needed and that the solution is framework-agnostic and DBM-compliant, with accessibility features built into the code [42-44][88-95].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The solution works with any web framework, complies with Digital Brand Management (DBM) standards, and does not require backend changes [S1].
MAJOR DISCUSSION POINT
Framework‑agnostic, DBM‑compliant, and requires no backend overhaul
AGREED WITH
Shailendra Pal Singh
Argument 7
Supports 22 Indian scheduled languages; uses 350+ language models
EXPLANATION
Shailendra mentions that the plugin leverages more than 350 language models and can render content in all 22 scheduled Indian languages, providing broad linguistic coverage.
EVIDENCE
He states that the solution uses 350 plus models from their platform [19] and Swati adds that the plugin makes a website available in all 22 Indian scheduled languages [53-54].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The platform utilizes over 350 language models to provide translation across all 22 scheduled Indian languages [S13].
MAJOR DISCUSSION POINT
Supports 22 Indian scheduled languages; uses 350+ language models
AGREED WITH
Shailendra Pal Singh
Argument 8
Demonstration of copy‑paste integration on a demo site
EXPLANATION
Swati walks the audience through a live demo where a simple copy‑paste of the plugin code adds multilingual capability to a dummy website, showing the ease of integration.
EVIDENCE
She shows the demo website, requests the copy-paste of the plugin code, and explains that the site’s content is only in English before integration [52-58].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
A live demonstration showed that copying and pasting the plugin code instantly added multilingual capability to a dummy website [S1].
MAJOR DISCUSSION POINT
Demonstration of copy‑paste integration on a demo site
Argument 9
Clarification that the plugin persists language choice across navigation
EXPLANATION
Swati confirms that after the plugin is added, the selected language remains active when users move to other pages, ensuring a seamless multilingual experience.
EVIDENCE
She asks to refresh the site and shows that the plugin remains, keeping the website available in all 22 languages across pages [67-71].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The plugin maintains the selected language when users move to other pages, ensuring a seamless experience [S1].
MAJOR DISCUSSION POINT
Clarification that the plugin persists language choice across navigation
Argument 10
Explanation of DBM compliance and accessibility features embedded in the code
EXPLANATION
Swati explains that DBM compliance ensures accessibility for visually impaired users and that the plugin incorporates technical features to meet these standards.
EVIDENCE
She describes DBM compliance as a set of accessibility features that enable visually impaired users to access the website, with the necessary technical integrity built into the plugin code [92-95].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The plugin is DBM-compliant and includes built-in accessibility features for visually impaired users; broader accessibility challenges are discussed in the literature [S1][S9].
MAJOR DISCUSSION POINT
Explanation of DBM compliance and accessibility features embedded in the code
Argument 11
Ability to translate from any source language directly, not only English → target
EXPLANATION
Swati notes that the plugin can translate from any source language (e.g., Marathi) directly to the target language, removing the need for English as an intermediate step.
EVIDENCE
She explains that the plugin can translate directly from a source language other than English, such as Marathi, to the desired target language without using English as a bridge [104-109].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The system can translate directly between Indian languages without using English as an intermediate language [S13].
MAJOR DISCUSSION POINT
Ability to translate from any source language directly, not only English → target
Argument 12
“Skip translation” class to exclude calendars, email IDs, etc., from translation
EXPLANATION
Swati introduces a CSS class that can be added to elements that should not be translated, such as calendars or email addresses, giving developers fine‑grained control.
EVIDENCE
She describes the “skip translation” class that can be embedded to prevent translation of specific sections like calendars and email IDs [111-117].
MAJOR DISCUSSION POINT
“Skip translation” class to exclude calendars, email IDs, etc., from translation
Argument 13
Custom language ordering to prioritize regional languages in the UI
EXPLANATION
Swati explains that the plugin allows the ordering of language options so that a regional language (e.g., Hindi) can appear at the top of the list, improving user experience.
EVIDENCE
She shows that the language list can be reordered so that regional languages appear first, such as moving Hindi to the top [119-125].
MAJOR DISCUSSION POINT
Custom language ordering to prioritize regional languages in the UI
Argument 14
Option to limit displayed languages to a subset (e.g., 3‑4)
EXPLANATION
Swati mentions that while the plugin can support all 22 languages, administrators can choose to display only a limited number of languages in the dropdown if desired.
EVIDENCE
She states that the dropdown can be configured to show only three or four languages, though displaying all languages is encouraged [141-145].
MAJOR DISCUSSION POINT
Option to limit displayed languages to a subset (e.g., 3‑4)
Argument 15
Portal handling without page reload to preserve user‑entered data
EXPLANATION
Swati describes an enhancement where language switching on portal forms does not cause a full page reload, thereby retaining any data the user has already entered.
EVIDENCE
She explains that the plugin can be configured to avoid page reloads when a user changes language, preventing loss of entered form data [146-151].
MAJOR DISCUSSION POINT
Portal handling without page reload to preserve user‑entered data
Argument 16
Automatic detection and skipping of mixed‑language content
EXPLANATION
Swati notes that the plugin can automatically detect when content contains characters from a language different from the source and skip translation for those parts, avoiding incorrect translations.
EVIDENCE
She provides a use case where mixed Hindi and English characters are present, and the plugin automatically skips translation of those segments [152-160].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The plugin automatically detects mixed-language segments (e.g., Hindi-English code-mix) and skips translation for those parts to avoid errors [S13].
MAJOR DISCUSSION POINT
Automatic detection and skipping of mixed‑language content
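One plausible way to implement the skipping described above is a script-range check: for an English source, any segment that already contains Devanagari characters is treated as mixed or pre-translated and left alone. The exact detection rule the plugin uses is not described in the session; this is a minimal sketch under that assumption.

```python
# Sketch: detect segments that already contain target-script characters
# (e.g., Devanagari inside an otherwise-English page) and skip them.
# The skipping rule is an assumption; the Devanagari block is U+0900-U+097F.

def is_devanagari(ch: str) -> bool:
    return "\u0900" <= ch <= "\u097f"

def needs_translation(segment: str, source_lang: str = "en") -> bool:
    """For an English source, skip segments that already contain
    Devanagari characters (presumed mixed-language content), and
    skip whitespace-only segments."""
    if source_lang == "en" and any(is_devanagari(c) for c in segment):
        return False
    return bool(segment.strip())
```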
Argument 17
Batch processing for dynamic content to reduce API calls and latency
EXPLANATION
Swati explains that for sites with rapidly changing dynamic content, the plugin processes translations in batches, reducing the number of API calls and stabilizing response times.
EVIDENCE
She describes handling dynamic content for State Bank of India and MyBharat Hotel by batching translation requests, which lowered API calls and improved latency [162-168].
MAJOR DISCUSSION POINT
Batch processing for dynamic content to reduce API calls and latency
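The batching idea can be sketched as chunking many small translation requests into fewer API calls. The function `translate_batch` below is a stand-in for the real translation API, whose interface the session does not describe; the batch size is likewise an assumption.

```python
# Sketch: batch many small translation requests into fewer API calls.
# translate_batch() is a hypothetical stand-in for the real API.

from typing import Callable

def translate_in_batches(
    segments: list[str],
    translate_batch: Callable[[list[str]], list[str]],
    batch_size: int = 25,
) -> list[str]:
    """Translate `segments` in chunks of `batch_size`, so N segments
    cost ceil(N / batch_size) API calls instead of N calls."""
    out: list[str] = []
    for i in range(0, len(segments), batch_size):
        out.extend(translate_batch(segments[i : i + batch_size]))
    return out

# Count calls with a fake API to show the reduction: 60 segments
# become 3 calls instead of 60.
calls = []
def fake_api(batch: list[str]) -> list[str]:
    calls.append(len(batch))
    return [s.upper() for s in batch]  # placeholder "translation"

result = translate_in_batches([f"seg{i}" for i in range(60)], fake_api, batch_size=25)
```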
Argument 18
Voice‑activated language selection via microphone (e.g., Rail Madad)
EXPLANATION
Swati showcases a feature where users can speak the name of a language into a microphone button, and the entire website switches to that language instantly.
EVIDENCE
She demonstrates the mic button on the Rail Madad website that allows users to say a language (e.g., Gujarati) and have the site translate accordingly [169-174].
MAJOR DISCUSSION POINT
Voice‑activated language selection via microphone (e.g., Rail Madad)
Argument 19
URL redirection to language‑specific domains (e.g., MSD Hindi domain)
EXPLANATION
Swati describes a capability where selecting a language not only translates the page but also redirects the user to a domain dedicated to that language, ensuring consistent branding.
EVIDENCE
She explains that when a user selects Hindi on the MSD website, the plugin redirects to the Hindi domain, mapping language to domain [176-181].
MAJOR DISCUSSION POINT
URL redirection to language‑specific domains (e.g., MSD Hindi domain)
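The language-to-domain mapping can be sketched as rebuilding the current URL on the domain configured for the selected language, preserving path and query. The domain names below are placeholders; the real MSD domains are not given in the session.

```python
# Sketch: redirect to a language-specific domain on selection.
# Domain names are hypothetical placeholders.

from urllib.parse import urlsplit, urlunsplit

DOMAIN_BY_LANG = {
    "hi": "hi.example.gov.in",   # hypothetical Hindi domain
    "en": "www.example.gov.in",
}

def redirect_url(current_url: str, lang: str) -> str:
    """Rebuild the current URL on the domain mapped to `lang`,
    keeping the original path, query, and fragment. Languages
    without a dedicated domain stay on the current host."""
    parts = urlsplit(current_url)
    host = DOMAIN_BY_LANG.get(lang, parts.netloc)
    return urlunsplit((parts.scheme, host, parts.path, parts.query, parts.fragment))
```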
Argument 20
Glossaries customize translations, handle post‑translation fixes and transliterations
EXPLANATION
Swati outlines that glossaries are used to fine‑tune translations, correcting specific terms after translation and handling transliteration of proper nouns, ensuring contextual accuracy.
EVIDENCE
She explains that glossaries are used for post-translation adjustments and transliteration, such as keeping coined terms unchanged across languages [215-226].
MAJOR DISCUSSION POINT
Glossaries customize translations, handle post‑translation fixes and transliterations
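A common way to realize glossary-driven post-translation fixes is a substitution pass over the machine output, forcing preferred terms. This is a sketch of that general technique, not Bhashini's actual implementation; the glossary entries are illustrative.

```python
# Sketch: apply a client glossary as a post-translation pass,
# replacing raw MT terms with preferred renderings. The entries
# and matching rule are illustrative assumptions.

import re

def apply_glossary(translated: str, glossary: dict[str, str]) -> str:
    """Replace whole-word matches of raw MT terms with the glossary's
    preferred terms (longest entries first, to avoid partial hits)."""
    for raw in sorted(glossary, key=len, reverse=True):
        translated = re.sub(rf"\b{re.escape(raw)}\b", glossary[raw], translated)
    return translated

# e.g., enforce the precise term discussed later in the session.
glossary = {"appointed officer": "authorized officer"}
fixed = apply_glossary("Contact the appointed officer for approval", glossary)
```

Matching on whole words keeps entries like "officer" from corrupting longer words, which is one reason careful curation matters.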
Argument 21
Creation of over 1.5 million glossaries with clients
EXPLANATION
Swati mentions that more than 1.5 million glossary entries have been created in collaboration with various clients to improve translation quality across domains.
EVIDENCE
She states that they have created 1.5 million plus glossaries with customers [98-100].
MAJOR DISCUSSION POINT
Creation of over 1.5 million glossaries with clients
Argument 22
Examples of glossary impact: correcting punctuation‑induced errors, handling hyphenation, singular/plural mismatches, abbreviation meanings
EXPLANATION
Swati provides several real‑world examples where glossaries corrected translation errors caused by punctuation, hyphen usage, number agreement, and ambiguous abbreviations.
EVIDENCE
She illustrates a punctuation error corrected by adding SMT to the glossary [242-248]; a hyphen mismatch resolved by matching the glossary entry [256-260]; singular/plural differences fixed by aligning glossary terms [262-264]; and abbreviation meanings clarified for ‘BN’ in the BSF context [267-269].
MAJOR DISCUSSION POINT
Examples of glossary impact: correcting punctuation‑induced errors, handling hyphenation, singular/plural mismatches, abbreviation meanings
Argument 23
Emphasis on careful glossary curation to avoid semantic errors
EXPLANATION
Swati warns that incorrect glossary entries can lead to misleading translations, such as misinterpreting ‘authorized officer’ as ‘appointed officer’, and stresses the need for precise terminology.
EVIDENCE
She describes a case where translating ‘authorized officer’ to ‘niyukt adhikari’ changed the meaning to ‘appointed officer’, highlighting the risk of inaccurate glossaries [270-274].
MAJOR DISCUSSION POINT
Emphasis on careful glossary curation to avoid semantic errors
Argument 24
Glossaries are ingested per client; not shared across unrelated domains
EXPLANATION
Swati clarifies that each client receives a customized set of glossaries tailored to their domain, and these are not reused for other clients to maintain relevance and accuracy.
EVIDENCE
She explains that glossaries are customized per client and ingested only into that client’s solution, not shared across unrelated domains [330-336].
MAJOR DISCUSSION POINT
Glossaries are ingested per client; not shared across unrelated domains
AGREED WITH
Audience
Argument 25
>400 websites integrated, generating >24 million translation inferences
EXPLANATION
Swati shares deployment statistics, indicating that more than 400 websites have adopted the plugin and together have produced over 24 million translation inferences, demonstrating scale.
EVIDENCE
She reports that more than 400 websites are integrated and have generated over 24 million inferences [96-98].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
More than 400 websites have adopted the plugin, collectively producing over 24 million translation inferences [S1].
MAJOR DISCUSSION POINT
>400 websites integrated, generating >24 million translation inferences
Argument 26
Farmer form example illustrating reduction of travel distance for assistance
EXPLANATION
Swati revisits the farmer scenario to show how multilingual translation can eliminate the need for farmers to travel long distances for help with government forms.
EVIDENCE
She recounts the farmer who had to travel 40 km to find assistance because the form was only in English, underscoring the benefit of translation [21-25].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The farmer’s 40 km journey to obtain help with an English-only form demonstrates how multilingual translation can eliminate such travel burdens [S1].
MAJOR DISCUSSION POINT
Farmer form example illustrating reduction of travel distance for assistance
Argument 27
Specific client implementations (Maharashtra Finance Dept., State Bank of India, MyBharat Hotel) demonstrating handling of dynamic content and portal forms
EXPLANATION
Swati cites concrete deployments where the plugin addressed challenges such as bilingual source sites, dynamic content, and portal form data retention, showcasing its versatility.
EVIDENCE
She describes the Maharashtra Finance Department use case where both English and Marathi sources are present and need selective translation [133-140]; and the dynamic-content handling for State Bank of India and MyBharat Hotel, where batch processing reduced API calls and latency [165-168].
MAJOR DISCUSSION POINT
Specific client implementations (Maharashtra Finance Dept., State Bank of India, MyBharat Hotel) demonstrating handling of dynamic content and portal forms
Argument 28
Expansion to 36 Indian languages and 35 international languages
EXPLANATION
Swati announces plans to broaden the plugin’s language coverage to include 36 Indian languages and 35 additional international languages, extending its global reach.
EVIDENCE
She states that the roadmap includes expanding to 36 Indian languages and adding 35 international languages [190-194].
MAJOR DISCUSSION POINT
Expansion to 36 Indian languages and 35 international languages
Argument 29
Automated glossary upload via onboarding portal
EXPLANATION
Swati mentions an upcoming feature that will let clients upload glossaries directly through an onboarding portal, streamlining the customization process.
EVIDENCE
She explains that the future roadmap includes automating glossary uploads via the onboarding portal [195-196].
MAJOR DISCUSSION POINT
Automated glossary upload via onboarding portal
Argument 30
Addition of an accessibility bar with text‑to‑speech and screen‑reader support
EXPLANATION
Swati outlines a planned accessibility enhancement that will embed a bar offering text‑to‑speech and screen‑reader capabilities, further improving inclusivity.
EVIDENCE
She notes that an accessibility bar with text-to-speech and screen-reader integration is part of the upcoming features [196-197].
MAJOR DISCUSSION POINT
Addition of an accessibility bar with text‑to‑speech and screen‑reader support
Argument 31
Plugin can be used by private and public entities under separate collaboration agreements
EXPLANATION
Swati clarifies that while the plugin is a government‑backed initiative, private sector entities can also adopt it through distinct collaboration agreements.
EVIDENCE
She states that different collaboration agreements exist for private and public entities and directs interested parties to the Bhashini Pavilion [307-310].
MAJOR DISCUSSION POINT
Plugin can be used by private and public entities under separate collaboration agreements
AGREED WITH
Audience
Argument 32
Availability of stakeholder team at Bhashini Pavilion for private‑sector onboarding
EXPLANATION
Swati points out that a dedicated stakeholder team is present at the Bhashini Pavilion to assist private organisations with onboarding and usage of the plugin.
EVIDENCE
She mentions that stakeholders handling startups and private organisations are available at the Bhashini Pavilion to help with onboarding [307-310].
MAJOR DISCUSSION POINT
Availability of stakeholder team at Bhashini Pavilion for private‑sector onboarding
Shailendra Pal Singh
2 arguments · 127 words per minute · 360 words · 169 seconds
Argument 1
Supports 22 Indian scheduled languages; uses 350+ language models
EXPLANATION
Shailendra highlights the technical breadth of the solution, noting that it leverages over 350 language models to provide translation across all 22 scheduled Indian languages.
EVIDENCE
He mentions that the solution uses 350 plus models from their platform [19] and that the plugin supports all 22 Indian scheduled languages [53-54].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The platform utilizes over 350 language models to provide translation across all 22 scheduled Indian languages [S13].
MAJOR DISCUSSION POINT
Supports 22 Indian scheduled languages; uses 350+ language models
AGREED WITH
Swati Sharma
Argument 2
Query about DBM compliance for government sites
EXPLANATION
Shailendra asks for clarification on how the plugin meets Digital Brand Management (DBM) compliance requirements, specifically for government websites.
EVIDENCE
He asks, “So Shati, can you just give some light on what is DBM compliant as how the website is DBM compliant?” [90-91].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The plugin’s DBM compliance and its relevance for government websites are explained in the product description [S1].
MAJOR DISCUSSION POINT
Query about DBM compliance for government sites
AGREED WITH
Swati Sharma
Audience
4 arguments · 144 words per minute · 220 words · 91 seconds
Argument 1
Question on using the plugin for commercial/private websites
EXPLANATION
An audience member inquires whether the government‑sponsored translation plugin can also be deployed on commercial or private sector websites.
EVIDENCE
The participant asks, “Can it be also used for commercial purpose like for private or public entity? Can they also use that in their websites?” [304-306].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The solution can be adopted by private sector entities under separate collaboration agreements, as noted in the presentation overview [S1].
MAJOR DISCUSSION POINT
Question on using the plugin for commercial/private websites
AGREED WITH
Swati Sharma
Argument 2
Inquiry about region‑based default language selection
EXPLANATION
Another audience member asks whether the plugin can automatically set default languages based on the visitor’s region, such as showing Hindi for Delhi users and Marathi for Maharashtra users.
EVIDENCE
The question outlines the desire for region-based default language changes: “Can it change the default languages? Can it change from the region perspective?” [311-317].
MAJOR DISCUSSION POINT
Inquiry about region‑based default language selection
Argument 3
Concern about maintaining domain‑specific glossaries and fine‑tuning models
EXPLANATION
An audience participant raises a technical concern about how glossaries for specific domains are maintained and whether they are used to fine‑tune the underlying AI models.
EVIDENCE
The participant asks, “How do we ensure that each glossary according to the domain is maintained and then trained or fine-tuned?” [326-329].
MAJOR DISCUSSION POINT
Concern about maintaining domain‑specific glossaries and fine‑tuning models
AGREED WITH
Swati Sharma
Argument 4
Response that glossaries are customized per client and fine‑tuning is performed but requires domain classification
EXPLANATION
Swati responds that glossaries are indeed customized for each client and that fine‑tuning of models is carried out after classifying content into domains, acknowledging the complexity of the process.
EVIDENCE
She explains that glossaries are customized per client and ingested into the client’s solution, and that they do fine-tune models after classifying domains [330-336].
MAJOR DISCUSSION POINT
Response that glossaries are customized per client and fine‑tuning is performed but requires domain classification
AGREED WITH
Swati Sharma
Agreements
Agreement Points
The language barrier hampers citizens’ access to digital services and must be broken.
Speakers: Swati Sharma, Shailendra Pal Singh
Language barrier hampers citizens’ access to digital services
To break the language barrier that exists in our country.
Both speakers stress that most online content is only in English, creating a barrier for the majority of Indians, and that breaking this barrier is essential [5-6][21-25][7-8].
POLICY CONTEXT (KNOWLEDGE BASE)
This concern mirrors the digital inclusion agenda highlighted at IGF 2023, where empowering communities to control their languages was identified as key to overcoming language barriers [S24], and aligns with calls for region-specific policies to address cultural differences [S21].
The Bhashini translation plugin supports all 22 Indian scheduled languages using over 350 language models.
Speakers: Swati Sharma, Shailendra Pal Singh
Supports 22 Indian scheduled languages; uses 350+ language models
Both presenters state that the solution leverages more than 350 models to provide translation into all 22 scheduled Indian languages [19][53-54].
POLICY CONTEXT (KNOWLEDGE BASE)
The claim reflects India’s national Bhashini program, recognized as a large-scale effort supporting 22 regional languages and serving billions of users, underscoring government commitment to multilingual AI [S33]; it also fits within broader AI strategy recommendations for bold, consistent national policies [S32].
The plugin is lightweight, framework‑agnostic, DBM‑compliant and requires no backend overhaul.
Speakers: Swati Sharma, Shailendra Pal Singh
Framework‑agnostic, DBM‑compliant, and requires no backend overhaul
Query about DBM compliance for government sites
Swati explains that the code is a one-liner, works with any framework, meets DBM accessibility standards and does not need backend changes, while Shailendra seeks clarification on DBM compliance, confirming its importance [42-44][88-95][90-91].
Both public and private entities can adopt the plugin under separate collaboration agreements.
Speakers: Swati Sharma, Audience
Plugin can be used by private and public entities under separate collaboration agreements
Question on using the plugin for commercial/private websites
Swati notes that private and startup stakeholders are available for onboarding, and an audience member asks whether commercial use is allowed, confirming that the plugin can be used beyond government sites [307-310][304-306].
POLICY CONTEXT (KNOWLEDGE BASE)
Public-private collaboration models for digital services are advocated in IGF discussions, emphasizing the need for clear legal frameworks to enable joint adoption of tools [S31], and OECD policy notes stress involving the private sector in decision-making on digital inclusion [S30].
Glossaries are customized per client, ingested into their solutions, and are used for fine‑tuning models.
Speakers: Swati Sharma, Audience
Glossaries are ingested per client; not shared across unrelated domains
Concern about maintaining domain‑specific glossaries and fine‑tuning models
Response that glossaries are customized per client and fine‑tuning is performed but requires domain classification
Swati clarifies that glossaries are built for each client’s domain and can be used to fine-tune AI models after domain classification, addressing the audience’s technical concerns [330-336][326-329][333-336].
POLICY CONTEXT (KNOWLEDGE BASE)
The use of extensive, client-specific glossaries to enhance translation quality was highlighted in recent sessions reporting over 1.5 million glossaries and their role in fine-tuning AI models [S22]; similar customization capabilities are echoed in OpenAI’s fine-tuning API developments [S23].
Similar Viewpoints
Both presenters emphasize the need to eliminate the English‑only digital divide, highlight the plugin’s extensive multilingual capability (22 languages, 350+ models), and stress its technical openness (framework‑agnostic, DBM‑compliant, no backend changes) [5-6][7-8][19][53-54][42-44][88-95][90-91].
Speakers: Swati Sharma, Shailendra Pal Singh
Language barrier hampers citizens’ access to digital services
To break the language barrier that exists in our country.
Supports 22 Indian scheduled languages; uses 350+ language models
Framework‑agnostic, DBM‑compliant, and requires no backend overhaul
Query about DBM compliance for government sites
Unexpected Consensus
Region‑based default language selection can be implemented.
Speakers: Audience, Swati Sharma
Inquiry about region‑based default language selection
So that’s an interesting use case. From what I’ve understood, you want different regions to have websites opened in different default languages. As per my knowledge, I don’t see a technical challenge to it.
An audience member asks whether the plugin can automatically set default languages based on the visitor’s region, and Swati confirms that it is technically feasible, showing an unexpected alignment on a nuanced feature request [311-317][318-325].
POLICY CONTEXT (KNOWLEDGE BASE)
Technical feasibility of region-based default language settings was demonstrated in an ElevenLabs session where users from Delhi automatically received Hindi interfaces [S20], reinforcing the policy push for region-specific language defaults [S21].
Private sector entities can adopt a government‑backed translation plugin.
Speakers: Audience, Swati Sharma
Question on using the plugin for commercial/private websites
We have different kind of collaborations with us. … stakeholders who are handling the startups, the private organizations also, and they can help you there.
While the plugin originates from a national initiative, both the audience and Swati agree that private companies may use it under separate agreements, which may not have been anticipated given its public-sector framing [304-306][307-310].
POLICY CONTEXT (KNOWLEDGE BASE)
Policy discussions on digital inclusion encourage private sector uptake of government-backed solutions, citing benefits of collaborative frameworks and the need for private participation in multilingual service delivery [S30][S31].
Overall Assessment

The discussion shows strong convergence among speakers on the existence of a language barrier in India, the technical solution offered by the Bhashini translation plugin (multilingual support, lightweight integration, DBM compliance), and the openness of the solution to both public and private sectors. Additional consensus emerged on nuanced features such as region‑based default language settings and the customized use of glossaries for domain‑specific fine‑tuning.

High consensus – the participants largely agree on the problem definition, the adequacy of the proposed technology, and its broad applicability, indicating a solid shared understanding that can drive coordinated implementation across sectors.

Differences
Different Viewpoints
Unexpected Differences
Overall Assessment

The discussion was largely collaborative, with speakers presenting a unified vision of eliminating language barriers through the Bhashini Translation Plugin. Questions from the audience about commercial use, regional default language settings, and glossary maintenance were answered affirmatively, indicating consensus rather than conflict. No substantive disagreements emerged regarding goals, implementation strategies, or policy implications.

Minimal to none. The lack of overt disagreement suggests strong alignment among participants, which bodes well for coordinated rollout and adoption of multilingual digital infrastructure.

Takeaways
Key takeaways
India’s digital ecosystem suffers a massive language barrier, with over 800 million citizens not fluent in English and 95% of online content in English.
The Bhashini Translation Plugin provides instant, site‑wide multilingual translation via a single lightweight code snippet, requiring no backend overhaul.
The plugin is framework‑agnostic, DBM‑compliant, and supports all 22 Indian scheduled languages (with plans to add 36 Indian and 35 international languages).
Advanced features include direct source‑to‑target translation, skip‑translation classes, custom language ordering, limited language display, portal handling without page reload, automatic mixed‑language detection, batch processing for dynamic content, voice‑activated language selection, and URL redirection to language‑specific domains.
A glossary system enables domain‑specific translation accuracy, handling post‑translation fixes, transliterations, and custom terminology; over 1.5 million glossary entries have been created for clients.
Real‑world impact: >400 websites integrated, >24 million translation inferences, and concrete use cases such as simplifying farmer form access and handling dynamic content for State Bank of India and MyBharat Hotel.
Future roadmap: expand language coverage, automate glossary uploads via an onboarding portal, and add an accessibility bar with text‑to‑speech and screen‑reader support.
The plugin can be licensed to private and public entities under separate collaboration agreements, with a stakeholder team available at the Bhashini Pavilion.
Resolutions and action items
Demonstrated copy‑paste integration of the plugin on a demo website; confirmed that the same one‑liner works across all pages.
Agreed to investigate and potentially implement region‑based default language selection for websites.
Private‑sector organizations can obtain the plugin through a separate collaboration agreement; interested parties directed to the Bhashini Pavilion.
Roadmap actions: add 36 Indian and 35 international languages, develop automated glossary upload, and integrate an accessibility bar with TTS/screen‑reader support.
Swati Sharma offered to be available at the Bhashini Pavilion for further discussions and onboarding.
Unresolved issues
Final decision and implementation timeline for region‑specific default language settings remain pending.
Specific licensing terms, pricing, and rollout schedule for commercial/private‑sector use were not detailed.
Exact process and timeline for automating glossary ingestion via the onboarding portal were not finalized.
Details on the fine‑tuning workflow for domain‑specific models and how clients will manage ongoing glossary updates were not fully addressed.
Suggested compromises
For region‑based default language, Swati acknowledged no technical barrier but proposed a review of the use case before committing to implementation.
Private‑sector usage will be accommodated through a separate agreement rather than the standard government framework, balancing open access with governance requirements.
Thought Provoking Comments
Follow-up Questions
Can the Bhashini translation plugin be used for commercial purposes by private or public (non‑government) entities?
Clarification is needed on licensing, agreements, and any restrictions for commercial use of the government‑backed translation solution.
Speaker: Audience (unnamed)
Can the default language of a website be automatically set based on the visitor’s region (e.g., Hindi for Delhi users, Marathi for Maharashtra users)?
Implementing region‑based language defaults would improve user experience, but requires technical feasibility and policy decisions.
Speaker: Audience (unnamed)
How can domain‑specific glossaries be consistently maintained, updated, and incorporated into model training or fine‑tuning?
Ensuring that each sector’s terminology stays current and is reflected in translation quality demands a systematic process for glossary management and model adaptation.
Speaker: Audience (unnamed)
Are glossaries used only at inference time, or are they also employed to fine‑tune the underlying translation models?
Understanding the role of glossaries in model improvement versus runtime substitution informs resource allocation and future development priorities.
Speaker: Audience (unnamed)
What are the technical and resource requirements to expand the plugin’s support from 22 to 36 Indian languages and to add 35 international languages?
Scaling to additional languages involves data collection, model training, evaluation, and possibly new UI/UX considerations.
Speaker: Swati Sharma
How can the glossary ingestion process be automated through an onboarding portal for clients?
Automating glossary upload would streamline deployments and reduce manual effort, but requires design of a secure, user‑friendly interface and backend processing pipeline.
Speaker: Swati Sharma
What is needed to integrate an accessibility bar (text‑to‑speech, screen‑reader support) into the translation plugin?
Adding built‑in accessibility features would broaden inclusivity, yet demands research into compatible APIs, performance impact, and compliance with accessibility standards.
Speaker: Swati Sharma
What are the best practices for efficiently translating dynamic content without overwhelming API calls or degrading response time?
Dynamic sites generate frequent content changes; optimizing batching, caching, and request throttling is essential for scalable real‑time translation.
Speaker: Swati Sharma
How can the plugin reliably detect and skip translation of mixed‑language or already‑translated segments within a page?
Accurate language detection in mixed content prevents over‑translation and preserves intended meaning, requiring advanced detection algorithms.
Speaker: Swati Sharma
What steps are required to ensure DBM (Digital Brand Management) compliance across diverse website frameworks when integrating the plugin?
Understanding and implementing DBM compliance is crucial for government portals and may involve additional validation, testing, and documentation.
Speaker: Shailendra Pal Singh

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

Building Public Interest AI Catalytic Funding for Equitable Compute Access

Session at a glance: Summary, keypoints, and speakers overview

Summary

The session opened with Deepali Khanna framing the emerging “compute divide” as the new digital divide that will shape who can lead AI development, noting that access to GPUs and cloud capacity is the main constraint on AI progress [4-7]. She highlighted India’s AI mission, which is deploying more than 38,000 public-sector GPUs to create a large-scale, sovereign yet open compute ecosystem for the Global South [14-18]. Khanna positioned philanthropy as a catalyst that can reduce risk, unlock capital and forge partnerships to accelerate democratization, outlining three discussion acts on what to democratize, how South-South partnerships and financing help, and concrete commitments [22-27][30-33].


Sushant Kumar announced the release of a report produced by the Democratizing AI Resources Working Group led by Dr Saurabh Garg, inviting feedback over the coming months [44-48][50-51]. Garg described the AI Summit’s guiding “sutras” of people, planet and progress and identified six foundational pillars (compute, capability, collaboration, connectivity, compliance and context) to guide a roadmap for equitable AI [56-68]. He emphasized that compute is today’s defining barrier and argued for shared, affordable infrastructure, capability diffusion through joint research, and robust yet flexible governance, proposing the open-source “Maitri” platform as a digital public good that countries can adapt [69-75][76-79]. Garg also warned that future model designs could reduce the current heavy compute demand, suggesting that smaller, domain-specific models might alleviate the energy-intensive barrier [84-89].


In response to a question about India’s governance model for public-interest compute, Garg said the focus should be on intelligent prioritization rather than rationing, and that philanthropic actors can help ensure affordable access [109-112]. Martin Tisné cautioned that building compute capacity alone can create “white-elephant” data centres if not paired with contextual data and open-source software, noting that most open-source funding comes from large corporations and that the ecosystem’s critical dependencies are under-resourced [125-138][139-140]. Vilas Dhar argued that sovereignty should be reframed from a territorial notion to an active, participatory model of AI diffusion, calling for new institutional intermediaries, such as those exemplified by Kalpa Impact, to connect talent, policy and capital for public-interest AI [163-186][187-191]. Dr Shikha Gitao presented an Africa compute-demand index, estimating a need for 2.5 million GPU-hours annually and highlighting the gap between demand and current supply, while stressing that investment readiness (including power, talent and use-cases) is essential for effective South-South collaboration [223-250][260-284]. Shaun Seow suggested that compute may be overrated compared with energy and application layers, pointed out latency and data-sovereignty limits to sharing compute across regions, and proposed aggregating demand to negotiate better cloud pricing and using philanthropy to subsidize costs and close skills gaps [310-333]. The panel concluded that future efforts must move beyond hardware to comprehensive public-interest frameworks covering models, data, talent and governance, as emphasized by Garg’s final call for broader systemic work [361-362].


Keypoints

Major discussion points


The emerging “compute divide” and the need to democratize AI infrastructure.


Deepali frames the problem as a shift from a digital to a compute divide, stressing that access to GPUs and cloud capacity will decide who shapes AI’s future [4-7][14-18]. Dr. Garg’s working group identifies compute as the “defining barrier” and outlines six pillars (compute, capability, collaboration, connectivity, compliance, context) to guide a roadmap [68-70].


A shared, multi-stakeholder platform - Maitri – as a digital public good.


The group proposes a voluntary, modular platform (M-A-I-T-R-I) that countries can adopt and customize to expand shared access to compute, data, and governance [76-79].


Beyond hardware: data access, open-source ecosystems, talent and governance are equally critical.


Panelists warn that simply building data centres can create “white-elephant” resources if local data, open-source tooling, and skilled people are missing [128-133][148-152][274-283]. They call for robust, flexible governance and new funding models for open-source dependencies [132-138].


South-South partnerships and catalytic philanthropy as levers for equitable AI diffusion.


The Rockefeller Foundation positions philanthropy as a risk-reducer and capital-unlocker [23]; Dr. Garg stresses “intelligent prioritization” of compute for public-interest work, with philanthropy playing a key role [109-112]. Vilas and Martin discuss the need for new institutional intermediaries that can translate compute capacity into concrete outcomes for developing economies [155-166][184-191].


Practical metrics and institutional readiness: demand indexes, investment readiness, latency, and energy constraints.


Dr. Shikoh Gitau introduces a “Compute Demand Index” and an “AI Investment Readiness Index” to quantify GPU-hour needs and the capacity of countries to use them [223-236][241-246]. Shaun highlights physical limits such as latency and energy that affect cross-border compute sharing [322-328], while later remarks stress the importance of aligning compute with local use-cases and building resilient, relational notions of sovereignty [340-354].


Overall purpose / goal of the discussion


The session moves the conversation from diagnosing the global compute gap to outlining actionable pathways: defining what AI resources must be democratized, exploring South-South collaborations and catalytic financing, and committing to concrete institutional mechanisms (e.g., the Maitri platform, demand/readiness indexes) that can be launched within the year [24-33][27-30][31-33].


Tone of the discussion


Opening: Optimistic and urgent, emphasizing AI’s transformative promise and India’s pioneering public-interest compute rollout.


Middle: Analytical and cautionary, with panelists highlighting gaps in data, open-source funding, talent, and the risk of “white-elephant” infrastructure.


Later: Collaborative and solution-focused, proposing concrete tools (Maitri, indexes) and calling for new institutional intermediaries and philanthropic catalysts.


Closing: Hopeful yet realistic, acknowledging the complexity of scaling compute while reaffirming commitment to equitable, public-interest AI outcomes.


Speakers

Deepali Khanna – Senior leader at the Rockefeller Foundation, focusing on AI democratization and public interest AI infrastructure. [S1]


Dr. Saurabh Garg – Secretary, Ministry of Statistics and Programme Implementation, Government of India; Chair of the Democratizing AI Resources Working Group. [S4]


Andrew Sweet – Vice President, Rockefeller Foundation; moderator for the panel discussion. [S7]


Shaun Seow – CEO, Philanthropy Asia Alliance; former senior roles at Temasek and CEO of Mediacorp. [S8]


Dr. Shikoh Gitau – Founder and CEO, Kala AI (Kenya); leader in AI access and compute initiatives for Africa. [S9]


Sushant Kumar – Partner at Kalpa Impact, collaborator on the AI democratization report.


Vilas Dhar – President, Patrick J. McGovern Foundation; member of the UN Secretary-General’s High-Level Advisory Board on AI. [S13][S15]


Martin Tisné – Founder, Current AI; Public-Interest Envoy for France’s AI Action Summit. [S16]


Additional speakers:


Shri Abhishek Singh – Mentioned for leadership and partnership support (role not specified).


Charu – Acknowledged for extensive work in organizing the session (role not specified).


Anish – Member of the Kalpa Impact team (role not specified).


Jennifer – Member of the Kalpa Impact team (role not specified).


Full session report – Comprehensive analysis and detailed insights

Opening Remarks – Deepali Khanna


Deepali Khanna opened the session by noting that the promise of artificial intelligence is now limited not by imagination but by a “compute divide” – unequal access to GPUs, cloud capacity and scalable infrastructure that will decide who gets to shape AI’s future [4-7]. She highlighted India’s AI mission, which is mobilising more than 38,000 public-sector GPUs to create one of the world’s most ambitious sovereign-yet-open compute ecosystems for the Global South [14-18]. Khanna framed the discussion in a three-act structure – defining what to democratize, exploring South-South partnerships and financing, and securing concrete commitments [22-27]. She thanked the leaders supporting the effort – Shri Abhishek Singh, Dr Saurabh Garg, Charu, Martin Tisné, Vilas Dhar, Shaun Seow, Dr Shikoh Gitau, and the Kalpa Impact team [30-33].


Report Launch – Sushant Kumar


Sushant Kumar announced the release of the report “Opening up Computational Resources for New AI Futures”, produced by the Democratizing AI Resources Working Group under Dr Saurabh Garg’s leadership [44-48][50-51]. He invited participants to provide feedback over the coming months [44-48].


Keynote – Dr Saurabh Garg


Dr Garg, chair of the working group, reminded the audience of the AI Summit’s three guiding “sutras” – people, planet and progress – and quoted the summit’s mandate: “AI must serve human welfare, advance sustainable development and enable shared prosperity” [60-62]. He noted that the summit convened seven working groups [60-62]. Garg outlined six foundational pillars for a collective roadmap: compute, capability, collaboration, connectivity, compliance and context [68-70]. He described compute as today’s defining barrier, with GPUs and high-performance clusters concentrated in a few regions, and argued that affordable, shared infrastructure is essential [69-71]. To address this, the group is prototyping Maitri – a non-binding, voluntary, modular digital public good that countries can adopt, customise and build upon, facilitating shared access to compute, data and governance [76-79]. Garg also recalled Vishal Sikka’s remark that future model designs might shift from today’s energy-intensive large-scale architectures toward smaller, domain-specific models, likening the trade-off to “calorie vs. gigawatt” considerations [84-89]. When asked how India’s compute programme would be governed if treated as a public utility, Garg replied that the focus should be on “intelligent prioritization” rather than strict rationing, positioning the platform as an enabling public-good that philanthropy can help fund to ensure affordable access for public-interest projects [109-112].
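
The “calorie vs. gigawatt” comparison checks out as simple unit conversion (an illustrative calculation, not part of the session): spreading a 2,000-kilocalorie daily energy budget evenly over 24 hours yields roughly the power draw of a 100-watt bulb.

```python
# Back-of-envelope arithmetic behind the "2,000 calories vs. 100-watt bulb"
# comparison; dietary calories are kilocalories.
KCAL_TO_JOULES = 4184          # 1 kilocalorie = 4,184 joules
SECONDS_PER_DAY = 24 * 3600    # 86,400 seconds

def daily_kcal_to_watts(kcal_per_day: float) -> float:
    """Average power (watts) of an energy budget spread evenly over one day."""
    return kcal_per_day * KCAL_TO_JOULES / SECONDS_PER_DAY

print(f"{daily_kcal_to_watts(2000):.0f} W")  # ≈ 97 W, close to a 100-watt bulb
```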


Panel Discussion


Moderator – Andrew Sweet


Martin Tisné (on moving from AI consumer to co-creator) warned that simply building compute capacity risks creating “white-elephant” data centres that sit idle without contextual data, language resources and open-source tooling [125-128]. He stressed that data innovation, especially privacy-preserving sharing mechanisms, lags far behind compute advances, and that most open-source funding comes from large corporations, leaving critical low-tier dependencies under-resourced [132-138]. Tisné concluded that effective AI diffusion requires coordinated attention to compute, data and open-source ecosystems [139-140].


Vilas Dhar reframed sovereignty, arguing that it should shift from a Westphalian, territorial model to an active, participatory approach that builds institutions capable of translating compute into locally relevant outcomes [163-166][170-176]. He likened the needed institutional framework to the Indian Premier League’s model of world-class, inclusive organisations, and called for new intermediaries-such as Kalpa Impact-that can connect talent, policy and capital to public-interest AI [185-191][310-313]. Dhar warned against a “trickle-down” view of AI diffusion, advocating instead for interdependent, mutually beneficial partnerships that move beyond competition [350-354].


Dr Shikoh Gitau presented a concrete “Compute Demand Index” for Africa, estimating a need for 2.5 million GPU-hours annually (rising to 7.5 million over three years) [223-226][243-246] and highlighting that the continent currently possesses only about 5% of this capacity [250-254]. She introduced an “AI Investment Readiness Index” to assess whether countries have the power, talent, data and use-cases required to make compute effective [231-236]. Gitau argued that without clear use-cases-e.g., health, education or agriculture-donated GPUs remain idle, and she called for South-South collaborations where India could allocate specific GPU-hour blocks to African nations based on demand [292-298][300-304].
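
The gap implied by these figures can be made concrete with a quick calculation (illustrative only; the quantities are as quoted in the session):

```python
# Rough supply/demand arithmetic for the Compute Demand Index figures above.
annual_demand_gpu_hours = 2_500_000   # estimated annual need for the continent
current_supply_share = 0.05           # roughly 5% of that capacity exists today

current_supply = annual_demand_gpu_hours * current_supply_share
shortfall = annual_demand_gpu_hours - current_supply
print(f"supply ≈ {current_supply:,.0f} GPU-hours, shortfall ≈ {shortfall:,.0f}")
```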


Shaun Seow, CEO of Philanthropy Asia Alliance, offered a contrasting perspective, suggesting that compute is “overrated” compared with energy and application layers [310-314]. He highlighted physical constraints such as latency of 50-100 ms over 10,000 km [322-328] and data-residency regulations that make direct cross-border compute sharing between, for example, India and Indonesia impractical. Seow proposed aggregating demand across countries to negotiate better cloud pricing [331-334] and using philanthropy to subsidise compute for startups and impact organisations, while also noting the urgent need to close the skills gap in Asia [335-339].
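
The quoted latency range is consistent with simple propagation physics (a sketch, assuming light travels through optical fiber at roughly two-thirds its vacuum speed; the 10,000 km distance is the figure cited above):

```python
# Minimum propagation delay over long-haul fiber; real-world latency adds
# routing, queueing and protocol overhead on top of this physical floor.
FIBER_SPEED_KM_PER_S = 200_000  # roughly 2/3 of the speed of light in vacuum

def one_way_latency_ms(distance_km: float) -> float:
    """Best-case one-way delay in milliseconds over the given fiber distance."""
    return distance_km / FIBER_SPEED_KM_PER_S * 1000

d = 10_000  # km, a typical intercontinental route
print(f"one-way ≈ {one_way_latency_ms(d):.0f} ms, "
      f"round-trip ≈ {2 * one_way_latency_ms(d):.0f} ms")  # 50 ms / 100 ms
```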


In a later turn, Martin Tisné reflected on sovereignty, distinguishing traditional territorial control from “relational” sovereignty exemplified by indigenous data-ownership concepts, and advocated for a global, collaborative stack that balances control with agency [336-339].


Vilas Dhar responded by emphasizing the need for participatory institutions that foster interdependence rather than competition, and he called for concrete institutional building blocks within the next twelve months to link compute provision with talent development, data stewardship and policy [340-354].


Closing Remarks – Dr Saurabh Garg & Andrew Sweet


Dr Garg reiterated that democratization must extend beyond hardware to include models, data, talent and interoperable governance frameworks, urging the community to develop public-interest standards that address the full AI stack [361-362]. Andrew Sweet thanked the Indian government, Kalpa Impact and all panelists, announced that the report would be publicly available for comment until 31 March, and invited participants to continue the conversation beyond the summit [363-365].


Consensus & Divergences


Across the session, participants agreed that the compute divide is a critical barrier, that philanthropy can act as a catalyst, and that robust, flexible governance is essential for moving nations from AI consumers to co-creators. Disagreements emerged around the primary bottleneck-whether compute, data or broader investment readiness should be prioritised-and over the feasibility of cross-border compute sharing, with Shaun Seow highlighting technical and regulatory limits while Dr Gitau advocated for South-South GPU-hour allocations. The dialogue also revealed divergent views on sovereignty: Martin Tisné promoted relational, indigenous-data models, whereas Vilas Dhar called for new participatory institutions that transcend territorial notions. These nuanced debates underscore the need for hybrid approaches that combine shared infrastructure (e.g., Maitri), measurable demand and readiness metrics, targeted philanthropic financing, and innovative institutional intermediaries to achieve equitable, public-interest AI outcomes.


Session transcript – Complete transcript of the session
Deepali Khanna

to be with us, so thank you. We are here because we believe in AI’s transformative potential, and I’m certain you’ve heard a great deal about it over the past few days. Today, this session is about something deeper. The digital divide is rapidly becoming a compute divide. AI today is not constrained by imagination. It is constrained by infrastructure, by who has access to GPUs, to cloud capacity, to scalable compute. And that divide will determine who shapes the future of AI. Democratization in this context is not about catching up. It is about expanding who gets to lead. It is about ensuring that the next generation of AI breakthroughs are not concentrated in a handful of geographies, but are shaped by diverse talent, languages, and lived realities across the world.

And here, India is not waiting for permission. India is not waiting for permission. India is showing that it can be done differently. Through the India AI mission and through the compute capacity plan, mobilizing more than 38,000 GPUs as public infrastructure, India is building one of the most ambitious public interest compute ecosystems anywhere in the world. This is not incremental reform. This is infrastructure at scale. This is sovereign capability combined with openness. India is demonstrating that public interest AI infrastructure can be built in the Global South by the Global South and for the Global South. And this leadership matters because equitable access to compute is not just about hardware. It sits alongside access to data, open source models, talent, and institutional capacity.

India is proving that you can design AI ecosystems that are both globally competitive and locally grounded. At the Rockefeller Foundation, we believe this moment requires moving from diagnosis to action. Philanthropy’s role is to be catalytic, to reduce risk, unlock capital, and convene unlikely partnerships that accelerate progress. Over the next hour, our discussion will unfold in three acts. First, what exactly are we democratizing? That’s an important question. Second, how do South-South partnerships and catalytic financing accelerate progress? And third, what concrete commitments can we land this year? If India’s example shows us anything, it is this. Democratization is not theoretical. It is operational. It is scalable. And it is already underway. The question now is how we accelerate it together.

Before we begin, let me take a moment to acknowledge a few leaders in the room. Shri Abhishek Singh, who unfortunately has been pulled into another meeting, but his leadership has been amazing; his steadfast partnership and support has been something that I am extremely grateful for, and his vision of guiding this important work with clarity has been just spectacular. Dr. Saurabh Garg, we are honored by your presence; you have been in sessions since this morning, and thank you for your leadership. It’s truly a privilege to have you with us today. My colleague Andrew Sweet, who has joined us from across the world, one of the sharpest minds of the Rockefeller Foundation and truly a force for good, thank you for being with us today and supporting this conversation. And of course I want to also thank Charu, who has been working endlessly and very hard to kind of get us to this place. Thank you, Charu, for your leadership. Martin, Vilas, Shaun and Dr.

Shikoh, thank you for lending your voice and expertise to today’s discussion. Your perspectives will help ground this dialogue in both ambition and action, and I know all of you are action-oriented folks, so we’re going to have something really cool come out from here. And last but certainly not least, our partners at Kalpa Impact, Sushant, Anish, Jennifer, thank you for being extraordinary collaborators and for helping shape today’s session. It is now my pleasure to hand it over to Dr. Garg. Please, over to you, sir, or maybe I’ll hand it over. Okay.

Sushant Kumar

Thank you. Thank you, Deepali. When I mentioned the report, I fumbled the name, so I’ll go again. Opening up computational resources for new AI futures, new AI world is possible. And this is something that the team has worked really hard over the last few months. And today is an opportunity when we release a working version of that report and invite inputs, feedback, comments, and suggestions, which we will work through over the next few months. This research helped us think through and work with the Democratizing AI Resources Working Group under the leadership of Dr. Saurabh Garg. And he’s here. So it’s a pleasure and a privilege for us to invite him. And the other panelists to release this report.

Thank you. For opening up computational resources, or in fact all resources that are necessary for development of AI in public interest and for real-world impact, I could think of no better person than Dr. Saurabh Garg, under whose leadership I think we have come a long way in not just the intellectual thinking but, as he will tell you, in terms of operationalizing how we can bring this to life for billions in the global south and also the other countries in the world. Dr. Saurabh Garg, please, for your keynote.

Dr. Saurabh Garg

Thank you, and colleagues, panelists, great to be here, and great to see the large kind of attendance that we have seen over the past few days in the AI summit. And there were seven working groups set up under the AI Summit umbrella. And one of them was on democratizing AI resources. I had the privilege to chair that group along with Kenya and Egypt. So I’ll obviously talk a bit on that. But before that, just to say that I think all of us are of the opinion that AI will definitely transform the world. I think the question is whether that transformation would be equitable, would be inclusive and aligned with public interest. And I think that’s really the issue which concerns a lot of people.

The AI Summit itself was built around three guiding sutras: people, planet and progress. And therefore, the concept being that AI ultimately must serve human welfare, advance sustainable development and enable shared prosperity. I think these would be key background in the way these sutras were developed. And obviously, democratizing many of these resources would be key to that. During our working group discussions, we had the opportunity to talk to a large number of countries, people from academia, civil society, and other international organizations. And I think one consistent message was that most countries are not really seeking only access to AI, but also seeking agency in AI. And I think that’s key. And how the AI systems need to reflect each country’s own development priorities, languages, and social contexts.

From these discussions, there were six foundational pillars that we had to address, and we thought need to form the backbone of the collective roadmap for the future: compute, capability, collaboration, connectivity, compliance and context. And I’ll just briefly speak on each one of these a bit. Compute, no doubt, is today’s defining barrier. The access to GPUs, accelerators, high-performance clusters is a major issue for all AI ecosystems. But the issue is how it can be made distributable, affordable and reliable, and not concentrated in a few geographies. And this would no doubt require us to look at whether compute can become a shared infrastructure in future, a kind which supports public interest innovation, and to the extent that we are focusing on innovation, how that part can be a public interest infrastructure. Secondly, infrastructure would not be sufficient: there is a widening skills gap.

So how we can consider capability diffusion focusing on joint research, shared standards, open platforms and mutual learning. What needs to be done for this responsible deployment is so that we can link innovators to compute resources and citizens to trustworthy AI enabled services. Equally important would be governance. The governance framework needs to be robust enough to build trust, yet flexible enough to adapt to diverse social and cultural contexts. Open source and maybe modular AI stacks would help in enabling localization without creating dependency. So looking at some of these issues, on what mechanisms can be done to facilitate accessible and affordable computing resources by improving utilization rates and reducing transaction costs and also to lower barriers for access regardless of geography.

The working group looked at how this can be taken forward through a collaborative platform designed to expand shared access to compute and data in partnership. And the platform has been termed as Maitri, which is friendship in Hindi. Maitri, M-A-I-T-R-I, standing for Multi-Stakeholder AI for Trusted and Resilient Infrastructure, to be developed as a digital public good that countries can adopt, customize, and build upon. And obviously, it is a non-binding, voluntary, modular approach, depending on the context of each country: what kind of compute and what kind of methods can be used to have it accessible, at least for innovators and researchers; looking at data sets that can be put out, which take care of the national laws and national protocols in place; and looking at models.

So, which are open source and which can be placed. This we envisage would help to at least ensure that portions of AI are a global public good, because we are focusing on innovation and research out here. And this would go beyond just a focus on hardware and platforms, but also on skills, institutions and governance capacity. I would just like to mention one other area: how the technology might proceed in future. While infrastructure or compute seems to be the biggest constraint going forward as of now, that’s perhaps also based on the present models requiring large amounts of compute capacity and energy.

Going forward, would models retain this system of algorithms that they have, or would there be obviously small domain-specific niche models? I think yesterday there was a very nice remark made by Vishal Sikka, who mentioned that when we talk of compute infrastructure, we are talking in terms of gigawatts, nothing less than that. But when you talk of a human being, you talk in terms of only 2,000 calories required for a human being to sustain, for a day, which is not more than a 100-watt bulb. So are we missing something out here? I think that’s a very important point that he made yesterday, and that’s why the focus I think we need to have much more on the models, and that itself might solve a lot of the areas that we are in. When we’re talking of democratizing AI, perhaps that’s the path forward.

So I’ll stop here and thank you all. Thank you for this opportunity.

Sushant Kumar

We now transition to the panel discussion, and may I request Andrew Sweet, VP at the Rockefeller Foundation, who is the moderator for the panel, please join us here on the stage. May I request the other panelists, Dr. Shikoh Gitau, Martin Tisné, Vilas, to join us on stage. Yes, and Shaun, sorry. Sorry. Andrew, over to you.

Andrew Sweet

Thank you, Dr. Garg, for those inspiring remarks and for the framing, insight and perspective that you bring to this conversation, and all of the many conversations that you’ve had throughout the course of the week. So we’re excited to continue and deepen the conversation today, and very excited that we have five of the world’s brightest minds to discuss this topic. These are all people that have been in the AI arena for decades, this is not new to them, and all people that have deep regional expertise and global perspectives, so very excited for this conversation today. We don’t have a lot of time, we have about 25 or 30 minutes for the conversation, so we’re going to dig in, we’re not going to have a number of speeches, Dr.

Garg’s speech will be the only speech that you’ve heard today, but we’ll have a short series of provocations with actionable ideas for how we can move this agenda forward. And so hopefully this conversation can be, you know, informal, back-and-forth banter. I think we’ll have one round of questions, but it would be great if we could kind of feed off of each other’s questions and energy because I know we all have a lot to say here on the panel and a lot of expertise to share. So I’ll briefly introduce the panelists, then we’ll dig in. You’ve already met Dr. Garg. He’s the Secretary of the Ministry of Statistics and Program Implementation for the Government of India.

He has been instrumental in shaping India’s AI governance and previously led the technology stack for the transformative Aadhaar initiative. We have Martin Tisné, founder of Current AI and public interest envoy for France’s AI Action Summit. Martin has spent 15 years building multi-stakeholder initiatives like the Open Government Partnership that we talked about earlier today to govern technology based on democratic values. We have Vilas Dhar, president of the Patrick J. McGovern Foundation. Vilas serves on the UN Secretary General’s High-Level Advisory Board on AI and leads one of the world’s largest philanthropic movements for AI for public purpose. My friend Dr. Shikoh Gitau, founder and CEO of Kala AI, a visionary from Kenya. She established Safaricom Alpha and has been a leading voice in ensuring that digital transformation in Africa solves real problems in education, healthcare and agriculture. And finally, Shaun Seow, CEO of Philanthropy Asia Alliance.

Shaun is working to catalyze collaborative philanthropy across Asia, leveraging deep expertise from his time at Temasek and as CEO of Mediacorp. So we’ll continue the conversation. The first question will go to Dr. Garg. India has launched the India AI mission with a target of 38,000 GPUs. If we view compute as a public utility, much as we do with water and electricity, what is the governance model that India is envisioning, and should compute access be rationed or priced differently for public interest applications?

Dr. Saurabh Garg

So I would say that the focus is not on rationing but on intelligent prioritization. I think that’s going to be the focus: that the compute capacity is an enabling platform and, as I mentioned, a digital public good, at least where innovation and research is going. So that we focus, and I think that’s where a lot of the philanthropic organizations would have a large role to play, given that their focus is also on ensuring that AI benefits all. So with that focus in view, how governments, philanthropic organizations, and the private sector can collaborate to ensure that affordable compute capacities are accessible to all. I think those are the models that we are looking at, and that will ensure experimentation going forward.

Andrew Sweet

Thanks, Dr. Garg. Martin, I’ll go over to you. Through Current AI and the Paris Charter, you’ve convened governments to discuss public interest AI. How do we move nations from being consumers to genuine co-creators? And quickly, you’ve also spoken about this looming data bottleneck. What do we do to unlock data sets for training without compromising privacy?

Martin Tisné

Okay, two big questions. Thank you. So, as you mentioned, we launched Current AI last year. We’ll be launching just this afternoon our first product, which is an open hardware product looking at linguistic diversity. I think I’ll be a little bit provocative to maybe start our session. I think compute is critical for obvious reasons. I think that from a financial, from an innovation, and from a sovereignty perspective, it is also possible to overplay it. I’ll tell you what my worry is, and I’d love to know what the panel thinks. I do have a worry that we could end up in a few years’ time in a world where we succeed in having compute capacity in inverted commas, in a number of countries, including in the global south, but where effectively the data centers are not used.

We’ve been talking to colleagues around the world. You do also have data centers that are effectively kind of white elephants and that are not used anywhere close to full capacity. And so I think for countries to be able to exercise sovereignty, they need to have contextual AI. They need to have contextual data in their languages with all of the diversity and the incredible richness that typifies their cultures available in order to create contextual localized AI that actually serves outcomes that people care about. And so while the compute piece is important, I think it’s one part of the issue. We need to talk about the data piece and we need to talk about the second part is the open source one.

So briefly, I think throughout the event, people talk about open source AI, that it’s a really good thing, that we’re all pro it. I think we also need to talk about how, from a philanthropic perspective, we resource the open source ecosystem. The reality of open source software is that mostly the top tier of open source software is funded by large companies that are using it, right? Linux is partly funded effectively by volunteers working for SpaceX that are using it. There’s a bottom tier of dependencies in open source that are run on a shoestring, you know, by a few like critical, amazing people working overnight as volunteers. And there’s very few organizations, one of them, ROOST, which is a part of the Current AI portfolio and looks at robust open source trust and safety, that are funding those critical dependencies.

So I think that for states across the world, in the global south and the north, to really be able to exercise sovereignty, and I’d love to talk about this a bit before, but I don’t want to hog the mic, we need to talk about compute, but also we need to be realistic about what the compute is going to be used for. So I think the data piece and the open source piece are really important. I think I’ve probably run out of time to talk about the data bottleneck.

Andrew Sweet

Go for it.

Martin Tisné

Well, so the number… There are people in the room I’ve worked with for a long time on this issue. Vilas, you’re one of them. Sushant, you’re another. I won’t name check everyone. I think it’s fantastic that there’s been so much innovation in compute and we’ve seen such change over the past 10 years. In contrast, I think it’s a complete tragedy that we haven’t seen anywhere near as much innovation when it comes to data and specifically the ability for people to be able to share personal data in ways that both respect privacy and contribute to outcomes. And that’s effectively it. I think we need a huge amount more resources and thought, both when it comes to the technical side of the issue, and here, on the enterprise side, I think that partly it’s solved, but enterprise users of AI have access to these kinds of technical safeguards in a way that private users don’t.

And there’s a story that we can talk about if we have time. And then on the governance side, so for example, Velas, you and I have talked for a long time about different, and now there’s different forms of data stewardships, whether data trusts or others. To the day… I haven’t seen one that really scales to the level that we would want to see it scale. to and that I think we need a lot more resources, a lot more thinking there’s been work done but if we could harness even 20 % of the sort of like brain capacity of the world that’s going into compute right now I think we would be in a very different place. Thank you.

Andrew Sweet

Excellent, thanks Martin. Actually, Vilas, I’ll go next to you, because I think this reminds me a little bit about a recent article you wrote about the Indian Premier League as a model for how India builds world-class institutions. I re-read it this week in preparation for this conversation. Is there a similar IPL playbook for public interest compute, or is the window for building these public institutions closing as commercial consolidation accelerates?

Vilas Dhar

Well I can’t think of a more controversial topic to spend our time here in this conversation than cricket. It’s been a good week all around but I think many of the people in this room probably know. Before I start I just want to say Dr. Garg I want to acknowledge in particular your leadership on this work. I spend a lot of time with senior decision makers across governments and the conversations that we have had have really given me great hope for the combination of technological confidence but also an understanding of what this means across an ecosystem. And so I want to acknowledge your leadership in particular. Thank you. Look, this question around the IPL I think is great, right?

I mean, let’s not torture the analogy and take something really fun and then try to tie it to AI. But here’s what I’ll say about it. I think in many ways what we need is a new institutional framework that goes from the elites participating in their own places to something that feels deeply participatory. And around compute infrastructure in particular, we are stuck in a model where we keep re-engaging and renovating old concepts to try to describe a new world. I will tell you, sovereignty has been the buzzy word of the moment, right? Everybody wants to talk about sovereignty and diffusion. Sovereignty as a Westphalian concept that goes back a few centuries takes the idea that ownership of pieces of silicon somehow magically results in outcomes and impact that transform lives.

Now, there are logical links, and of course there are co-dependencies. But to simply say that we will site compute in a particular geography, and so figure out a way to disconnect ourselves from the interdependence of the 21st century, doesn’t really bring us to a good outcome. I’ll tell you the second part of this: AI diffusion. If you haven’t heard this already from every tech CEO here, this has been the buzzword of the moment. I spent some time yesterday with the prime minister and a number of tech CEOs who wanted to talk about their investments in India. Those investments in many ways followed the playbook of the PR press release: we’re going to build a new data center, we’re going to invest in new compute capacity.

But when you dig deep and ask the next question: who will this really benefit? What value does this create for public impact and outcomes? How does putting a large number of servers in a particular place result in that community finding economic uplift, a benefit in economic opportunity, a sense of dignity? The conversation sometimes falls flat. So AI diffusion to me, in its core concept, the idea that you hyper-concentrate technological capacity, compute and data, and somehow the rest of society benefits, sounds a little too much like something that as an American I know too intimately as trickle-down economics: the idea that if we made the rich as rich as possible, somehow the benefits would filter down to everybody else and it would work.

AI diffusion is a passive concept. It starts on the premise that we build technological capacity for a few and somehow it works out for everybody else. But there’s an alternate model and it ties directly to this report that’s been issued today and the work that we’ve been talking about. For AI to benefit everyone requires a direct and active impact. It requires us to step in and say what are the institutions we have to build that actually physically and metaphorically transform the idea of compute infrastructure to be something that everybody can use. It requires us to build the institutional layers and the capacity that lets a community that’s trying to solve a local problem know that compute isn’t the thing that holds them back.

Rather, it is the conceptualization of the problem, the aggregation of the full stack of resources, as Martin described: compute, data, governance mechanisms, and the political agency of communities to participate. Let us then turn that into the final app, solution, and infrastructural development that actually leads to the outcome we’re solving for. In many ways, I think this is the great role of the institutions represented here on the stage and in this room: for philanthropies to transform the capital landscape in a way that says that great entrepreneurs and leaders, like my dear friend Shikha here and so many here in India who are building open-source, public-access AI stacks, don’t have to worry about the resource constraints of the private capital markets; that they know they can access governmental and substantive structural resources that let them build the tools they want; and that they have equitable access to markets, both as a matter of policy and as a product, so they can go out to consumers and creators and provide a service that people can use at scale.

And the last part of this, and I have to say this: it doesn’t happen, as we’ve discussed, in the private market, but it also doesn’t happen exclusively by going to frontline nonprofits and saying, now you’re supposed to be the builders. It requires us to innovate a new institutional set of intermediaries. I think of groups like Calpa Impact, which I think is an incredible example of a combination of technical sophistication, policy impact, and support for government that actually sits at the layer connecting these different elements and lets us build on top of it. I think this is the work ahead. If we really think about pragmatic outcomes to this conversation, Andrew, one of the questions we might ask is: what are the institutions we need to build in the next 12 months that connect the dots around all of these pieces and support this transformation at scale?

Andrew Sweet

Dr. Shikha, I came across a recent article you put out saying that for the West, AI is a matter of efficiency, but for you, it’s a matter of life or death. You’ve been a champion for AI access; you’ve been very active in this summit and in the Kigali Summit, and we were together at the launch of the first-ever AI factory for Africa in April in Kigali. You’ve also said that if global tech companies want African data, they should provide compute infrastructure in return. How do we formalize these reciprocal agreements, and what does a true India-Africa partnership look like that doesn’t just replicate global North-South models, similar to what Vilas was just talking about?

Dr. Shikha Gitao

Thank you very much for having me. It’s always fun to listen to everyone here on this. I was hoping somebody was going to preempt some of the work I was going to talk about, but lucky for me, I still have some things to talk about. Thank you, Vilas. So when we talk about compute, it’s this amorphous thing. In fact, we launched an AI research lab in Nairobi, and we have some GPUs there, and one of the key things was a demo showing what a GPU is. Our PS was like, oh my God, this is what a GPU is, because he had never seen one. And then I made sure, every time I’m speaking, to ask: how many of you have actually seen a GPU, not on the Internet, but touched one?

Maybe five people. And this is everywhere. In every single room where we’re talking about compute, we ask the same question: have you ever seen a GPU? And right now, five to ten people. So it’s this thing that people talk about: we need GPUs, we need compute, we need all of these things. And for us, as an African continent, it is very important. Our research colloquium for the Global South came a few days ago. Same question: how many of you have seen a GPU? Only about 10 people had ever touched a GPU. How many of you need compute? Everybody raises their hand. But what does that actually mean? In fact, in one of the panels, the starting point was: when it comes to compute, we all need Jesus.

And I thought, how do we quantify this? So we, and I think we have already spoken to Calpa about this, were working at the same time on a framework. We just released a compute demand index, because we realized that every time we speak about compute, people have ideas, they have thoughts, they have proposals, but they don’t have the numbers. We need GPUs: how many? We need megawatts: and in the gigawatt-megawatt conversation, what does a gigawatt of compute actually mean? So we went ahead and said, for Africa, every time we’re having conversations with these governments: this is actually what you need, but you actually need to put money into it.
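Her gigawatt question can be made concrete with a rough conversion. A minimal sketch in Python, assuming roughly 1.4 kW of facility power per accelerator all-in (chip plus cooling and networking overhead); that per-GPU figure is an illustrative assumption, not a number from the panel:

```python
# Back-of-envelope: how many accelerators does a given facility power budget support?
# ASSUMPTION: ~1.4 kW per GPU all-in (chip + cooling + networking overhead).
# Real figures vary widely by hardware generation and datacenter design.

KW_PER_GPU_ALL_IN = 1.4

def gpus_supported(facility_megawatts: float) -> int:
    """Rough count of accelerators a facility of this power budget can run."""
    return int(facility_megawatts * 1000 / KW_PER_GPU_ALL_IN)

# Scales mentioned in the discussion: ~50 MW, ~200 MW, and "a gigawatt".
for mw in (50, 200, 1000):
    print(f"{mw:>5} MW -> ~{gpus_supported(mw):,} GPUs")
```

Under this assumption a gigawatt powers on the order of 700,000 accelerators, which is why the panel treats "a gigawatt of compute" as far beyond what most national use cases require.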

So our first index was demand, and the second one is whether your country is ready for this, which we are calling the AI Investment Readiness Index. I’ll give you some numbers. Africa needs 2.5 million GPU hours a year, 7.5 million over the next three years, to be able to start computing well. This is for training as well as research. That is something I can work with. So when I come to India and say I need 2.5 million GPU hours a year: how many of them can you give me? We had this conversation with the UNDP in Italy, and they said, we have 1.5 million GPU hours that we can donate. We have 1.1 million more to go.

Cassava is saying they are putting in 2,000 GPUs. How many GPU hours, hours, not physical GPUs, will those 2,000 GPUs actually provide for the continent? We have to start being very practical rather than arbitrary about what we want. Of the 7.5 million GPU hours we need over the next three years, Africa only has 5% of that. So we are doing the math: we only have 125,000 of these GPU hours a year, times three for the next three years. So when I go to Vilas, I say I need these GPU hours. It’s very practical: he can say, I can do half a million GPU hours.
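The arithmetic behind these figures can be checked directly; a minimal sketch using only the numbers as stated in the session:

```python
# Sanity-check of the compute-demand figures quoted in the session.
ANNUAL_NEED_GPU_HOURS = 2_500_000   # Africa's stated need per year
YEARS = 3
CURRENT_SHARE = 0.05                # Africa holds ~5% of what it needs

three_year_need = ANNUAL_NEED_GPU_HOURS * YEARS           # 7.5 million
current_annual_supply = ANNUAL_NEED_GPU_HOURS * CURRENT_SHARE  # 125,000/year
annual_gap = ANNUAL_NEED_GPU_HOURS - current_annual_supply

print(f"Three-year need:       {three_year_need:,} GPU hours")
print(f"Current annual supply: {current_annual_supply:,.0f} GPU hours")
print(f"Annual gap:            {annual_gap:,.0f} GPU hours")
```

The quoted figures are internally consistent: 2.5 million hours a year over three years gives the 7.5 million total, and 5% of the annual need is exactly the 125,000 hours she cites.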

So it’s not just going with an arbitrary number; I can say exactly how many GPU hours I need to do this. And for us, that is important. But for me, it is the conversation about investment, and that’s the question we asked. How do we have this South-South collaboration? How do we have this India collaboration? What does it actually look like? There’s the paradox: everybody, as Martin said, wants sovereignty; everyone wants to talk about diffusion of AI. But what does that actually mean? Do we actually need it? So I’ll give you two examples from my two favorite countries; hopefully nobody from them is here. Start with Nigeria. There’s something we’re calling the Nigerian paradox.

Nigeria is the number one country in the compute demand index. Why? Nigeria is doing very well: 110 million Internet users, a huge population, and strong performance in e-commerce and financial services. So they’re up there when it comes to why they need compute. And we’ve seen this in India; India is very high there as well on the same measures. But what about investment readiness? Investment readiness is whether they are able and capable of running a compute facility. Do they have power? Do they have the talent? And I love what Martin and the minister spoke about. When you think about compute, you don’t think about just GPUs.

It’s a whole stack of things: talent, governance, all of these things. When you think about investment readiness for compute, you have to look at all of them. Because I can give you GPUs, as he said; I’ve worked in digital transformation for the last 20 years, including as the digital transformation lead at the AfDB, and we would buy computers, go back three years later, and find they had never been powered on at all. And that’s what is going to happen with GPUs. If you give countries these GPUs and they don’t have the talent, they don’t have the power to run them, they don’t have the data sets, the models, or the use cases to build on top of them, you’re wasting that money.

And that’s where the investment readiness comes in. So we’re talking to countries, and we’ve had this conversation with African countries: there’s no point in investing all your dollars in putting up a compute facility. Get your talent ready. Get your data sets ready. Have strong use cases that people can back. Then, with all of that, we can define what the demand is and what money you need. Kenya, you do not need a gigawatt of compute to be able to run; maybe you need a 200-megawatt facility, and that’s where we start. So coming back to the question, how do we interact with India? This is our demand. Burundi might need 50 megawatts of GPUs.

Can India facilitate that? But it’s not just about facilitating the GPUs; it’s about what the GPUs are in service of: solving for health, education, agriculture. When you have clear use cases, the GPU demand becomes an obvious ask. Bridging that gap, and especially convincing governments to bridge it, is what we need to be able to do. And then the governance framework actually comes into play. Thank you. I know that’s a lot.

Andrew Sweet

That’s great. Thank you, Dr. Shikha. We’ll go to Shaun, and then I want to keep it informal for the remaining ten minutes after Shaun speaks: any reactions to any of the comments, and then we can do a lightning round if we have time; if we don’t, that’s fine as well. Shaun, over to you. The Philanthropy Asia Alliance brings together 80 members and partners to address Asia’s interconnected challenges through collaborative philanthropy. Is there an opportunity for Asia’s philanthropic networks to coordinate shared compute and infrastructure, pooling resources from places like India, Indonesia, and other nations rather than competing, and what would unlock that collaboration?

Shaun Seow

Thanks, Andrew. The advantage of coming last is that I can say I agree with all of them. Actually, I’m going to add to the much-maligned word, compute; maybe we could end the panel right away. I’m going to join Martin in agreeing that compute is actually a bit overrated, the ownership of compute. So when you think about the stack, I’m going to add another way to frame the conversation: Jensen Huang’s AI stack. Think of energy, hardware, compute, models, and applications. The top layer, applications, is really what will drive value capture, for the economics as well as the social impact. The stumbling block is probably energy at the bottom level.

And thankfully for many countries in Asia, the costs have been driven down because of the abundance of hydro, solar, and wind. Then when you think about the next layer, hardware, that’s obviously dominated by Chinese and American players. And when you think about the compute level, I understand why we fuss over compute, because the Americans own 75% of GPU cluster performance, the Chinese 15%, Europeans maybe about 4%, and the rest of us only about 0.1%; I think even India is just 1% of that. But the issue is actually deeper than ownership. If you think about what it takes to get the work done, it’s more about access. So on the question you’ve posed me about sharing compute between, for example, Indonesia and India: I live in Southeast Asia, and Indonesia is a couple of hours away from where I live.

And we know the situation in Indonesia quite intimately. There are data residency requirements, and that’s why there’s a build-out of data centers. Think also of the physical limitations, the latency of sharing compute between India and Indonesia. They are roughly 10,000 kilometers apart; when you think about a latency of, what, 50 to 100 milliseconds, it’s just not going to work for the sharing of compute between Indonesia and India. Attractive as the idea is, it doesn’t work. There are just physical limitations, data sovereignty, and privacy issues that prevent that from happening. So I just want to look at the positive side of what’s happening: the cost of compute is coming down, and new clouds and GPU-as-a-service are emerging. I think these developments are actually going to be good for the unleashing of AI, for social impact, for economic capture.
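Shaun’s latency objection follows directly from the physics of fibre; a quick check, assuming light in glass travels at roughly two-thirds of its vacuum speed and ignoring routing detours and switching delay (both of which only make the real number worse):

```python
# Best-case propagation delay between India and Indonesia over optical fibre.
# ASSUMPTION: refractive index ~1.47, so signals travel at about 2/3 the
# speed of light in vacuum; real routes are longer and add switching delay.

C_VACUUM_KM_S = 299_792                 # speed of light in vacuum, km/s
FIBRE_SPEED_KM_S = C_VACUUM_KM_S / 1.47

def round_trip_ms(distance_km: float) -> float:
    """Minimum round-trip time over a straight fibre path, in milliseconds."""
    return 2 * distance_km / FIBRE_SPEED_KM_S * 1000

rtt = round_trip_ms(10_000)             # the ~10,000 km figure from the panel
print(f"Best-case round trip: {rtt:.0f} ms")
```

The straight-line round trip already comes out near 100 ms, matching the 50 to 100 millisecond range he cites (one-way versus round-trip) before any real-world overhead is added.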

So the way you can think about it is: how do we make it a bit more accessible for startups and impact organizations? Maybe the way to think about it is aggregating demand, so that you can actually negotiate with the new cloud providers and get cheaper pricing, and then thinking about philanthropy coming in to subsidize some of the compute costs. And I kind of agree with the observation that you really need to go beyond just infrastructure; you need to think about the ecosystem you’re building. I think the skills gap in Asia is actually huge, and that could really be what’s stopping us from maximizing the power of AI in what we want to do. Is that too long?

Andrew Sweet

No, that’s perfect. I’m not sure if anybody wants to react to any of that. Martin, I see you scribbling furiously; maybe first reaction to you.

Martin Tisné

No, I am scribbling. I’m scribbling because I’m thinking about your points, the points of the panel, and the term sovereignty. My scribbles are to your point about the Westphalian concept of sovereignty: it’s about the ability to make law within your territory, it’s a very Global North concept, and it’s a notion of territory which has physicality. What I was scribbling was the physicality of the territory. We’re very focused on the physicality, as you were saying: on the GPUs, the bricks and mortar. We’re going to be okay because we’re going to be sovereign over this data centre; the data centre is on my territory. And what got me thinking is other concepts of sovereignty. When I spent a lot of time working on data collaboratives and data stewardship, I thought about indigenous data sovereignty, which is a different type of concept; it’s a more relational concept than a territorial one, right?

It’s about a pre-existing, inherent, relational authority over that which makes up a people. When we were studying, for example, indigenous data sovereignty in the Maori context in New Zealand, for the Maori community, any data that in any way involves Maori … is part of the Maori community’s legacy. So I think there’s something here in thinking about a quite rigid approach to sovereignty, which is about control, as mentioned, versus one which is more about agency and more relational. That’s what the panel has got me thinking about. And I’ve been doing some writing and thinking with colleagues and friends around the notion not of a controlled national stack but of a global, open, resilient, collaborative stack, which is not a national one at all. I’ll finish with this: that doesn’t mean that all the data is open, anything goes, anyone can extract your personal data, and you’re back in a sort of Zuboff surveillance-capitalism world. It’s one where it’s a question of choice and agency: what you wish to exercise authority over, and how.

That’s my scribbles. Thank you.

Vilas Dhar

As you can tell, when you get on a panel with people you love and respect, the conversation just flows. So I want to build off this point, and a little bit of what you said, Shikha. I want to take a different tack on this question of agency. If I had asked any development leader in the world 10 years ago, if you could have your dream of an extra gigawatt of energy capacity in your country, what would you do with it, I can’t imagine that any of them would have said: well, I want to use it to run a bunch of computation on things that may or may not have short-term economic value for my country.

Andrew, your organization has been incredible around the world at building capacity and grids in power production, in ensuring that people can use power for development. And yet somehow, for many of us, we are surrounded by conversations where the question has now become: how many megawatts and gigawatts can you put into compute for AI? It is a fundamental challenge when you think about our priorities in development. Going back to core principles of human rights, dignity, and participation in the world, to say that governments with limited capacity should now all of a sudden be focused on this topic brings us back to this question and this shift. In many ways, I acknowledge that the traditional conversation around compute is one of breaking over-dependence on the American AI stack and on other international players that are coming in.

But the response to over-dependence isn’t internal dependence; it’s interdependence. It’s saying: if there are places that have incredible capacity, and even the potential to drive, as Shikha said, the availability of compute hours, how do we build interconnectedness that makes that a mutual value exchange? Not merely clients who have to go to another country and say, please give us or let us buy compute, but rather the products of that compute building the infrastructure you can then use in your own country; allowing for centers of excellence that let local capacity and local competence drive what gets built; and letting that become the new tokens of international trade, in a way that leads to much more connected and shared prosperity rather than descending back to that 200-year-old concept of how we make sure we’re competitive in an adversarial frame.

I recognize that what I’ve just shared with you is maybe not where the dominant private-sector conversation is. And to those who would oppose it, the primary critique is: well, that sounds quite naive. And yet we’ve seen it happen. We’re seeing it in the few areas of hope in the multilateral system, where we’re actually finding that technology governance is something that brings everybody to the table and lets people engage in meaningful shared outcomes. We’re seeing the seeds of it. The question is whether we’re going to let them die out in the sun, or whether we’re going to water them and invest in them so they grow.

Andrew Sweet

Great. Dr. Garg, any final insights?

Dr. Saurabh Garg

I know there’s little time, but one thing I would say: perhaps we need to spend a bit more time going forward on the frameworks that will help ensure the public interest, looking beyond compute at models, talent, and data, and at how these can be shared and made interoperable in a manner that takes care of the public interest. So I’ll just stop there.

Andrew Sweet

Well, thank you. Thank you to the Indian government, and thanks to our partners at CalPA for putting this together, especially the authors. This is now officially out there; copies are available, and you have until March 31st to review the document and submit your reactions. Thank you to the panelists, really appreciate it. Enjoy the rest of the summit. I think the NDIA team wants to hand over some souvenirs from the panel. Thank you.

Related Resources: knowledge base sources related to the discussion topics (30)
Factual Notes: claims verified against the Diplo knowledge base (3)
Confirmed (high)

“India’s AI mission is mobilising more than 38,000 public‑sector GPUs to create one of the world’s most ambitious sovereign‑yet‑open compute ecosystems for the Global South.”

The knowledge base states that India is building one of the world’s most ambitious public-interest compute ecosystems with 38,000 GPUs as public infrastructure, confirming the reported figure [S1] and the similar description in [S14].

Confirmed (medium)

“The AI Summit’s three guiding “sutras” – people, planet and progress – and the mandate that “AI must serve human welfare, advance sustainable development and enable shared prosperity”.”

Multiple sources record the summit’s three sutras (people, planet, progress) and the associated mandate, matching the report’s wording [S109] and further echoed in [S110] and [S111].

Additional Context (medium)

“India’s AI mission is mobilising more than 38,000 public‑sector GPUs to create a sovereign‑yet‑open compute ecosystem.”

While the current deployment is around 38,000 GPUs, the knowledge base notes that India plans to expand its public infrastructure to 50,000–60,000 GPUs, providing additional context on the programme’s scaling trajectory [S48].

External Sources (115)
S2
Shaping the Future AI Strategies for Jobs and Economic Development — – Dipali Khanna- Kip Wainscott – Parag Khanna- Narendra Singh
S3
WS #2 Bridging Gaps: AI & Ethics in Combating NCII Abuse — David Wright: Thank you both. Yeah, amazing kind of explanation from the two people leading this. Thank you. Next, we’re…
S4
The Foundation of AI Democratizing Compute Data Infrastructure — -Saurabh Garg: Secretary in the Ministry of Statistics and Program Implementation in the Government of India
S5
https://dig.watch/event/india-ai-impact-summit-2026/the-foundation-of-ai-democratizing-compute-data-infrastructure — And they could be partly technological and partly policy -based or protocol -based. And a combination of this will ensur…
S6
Democratizing AI Building Trustworthy Systems for Everyone — – Dr. Saurabh Garg- Natasha Crampton – Dr. Saurabh Garg- Natasha Crampton- Justin Carsten
S7
Building Public Interest AI Catalytic Funding for Equitable Compute Access — -Andrew Sweet- VP at the Rockefeller Foundation, served as moderator for the panel discussion
S8
Building Public Interest AI Catalytic Funding for Equitable Compute Access — -Shaun Seow- CEO of Philanthropy Asia Alliance, working to catalyze collaborative philanthropy across Asia, has expertis…
S9
Building Public Interest AI Catalytic Funding for Equitable Compute Access — – Shaun Seow- Dr. Shikha Gitao – Vilas Dhar- Dr. Saurabh Garg- Dr. Shikha Gitao
S10
Webinar – session 1 — Dr. Gitao’s forum delved into the multifaceted role of the internet within modern society, underscoring its key contribu…
S11
Inclusive AI_ Why Linguistic Diversity Matters — -Sushant Kumar- Session moderator/host
S12
Building Public Interest AI Catalytic Funding for Equitable Compute Access — – Dr. Shikha Gitao- Andrew Sweet- Sushant Kumar
S13
Panel Discussion AI & Cybersecurity _ India AI Impact Summit — -Moderator- Session moderator (role/title not specified) -Vilas Dhar- President, Patrick J. McGowan Foundation
S14
https://dig.watch/event/india-ai-impact-summit-2026/building-public-interest-ai-catalytic-funding-for-equitable-compute-access — We have Vilas Dhar , president of the Patrick J. McGovern Foundation. Vilas serves on the UN Secretary General’s High -L…
S15
A Digital Future for All (afternoon sessions) — – Vilas Dhar – President and Trustee, Patrick J. McGovern Foundation Vilas Dhar: I mean, we assume that inertia is the…
S16
Building Public Interest AI Catalytic Funding for Equitable Compute Access — – Martin Tisné- Vilas Dhar – Martin Tisné- Vilas Dhar- Dr. Shikha Gitao- Dr. Saurabh Garg
S17
Inclusive AI_ Why Linguistic Diversity Matters — – Ayah Bdeir- Martin Tisne
S18
State of Play: AI Governance / DAVOS 2025 — Abdullah AlSwaha: I cannot stress on this enough. And let me draw another parallel for you. If you deprive a person o…
S19
IGF 2024 Opening Ceremony — Abdullah bin Amer Alswaha: I would like to devote my speech on, first of all, making sure, on a multilateral perspectiv…
S20
https://dig.watch/event/india-ai-impact-summit-2026/shaping-the-future-ai-strategies-for-jobs-and-economic-development — Governments willing to move decisively, private sector actors willing to collaborate, technologists willing to design fo…
S21
From principles to practice: Governing advanced AI in action — This comment was insightful because it identified a critical gap in AI governance: the lack of systematic follow-up and …
S22
Green and digital transitions: towards a sustainable future | IGF 2023 WS #147 — In terms of governance, a framework is deemed essential to operationalise long-term systems for the service of citizens….
S23
MahaAI Building Safe Secure & Smart Governance — AI does not recognize borders. We need interoperable frameworks, shared safety standards, and cooperative oversight mech…
S24
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Panel Discussion — Either you regulate or you innovate. Let’s figure out the way that the regulation and the governance drives innovation. …
S25
Informal Stakeholder Consultation Session — Because without dealing with this too much, with the emergence of artificial intelligence and other technologies that ar…
S26
Agenda item 6 — Djibouti:Thank you, Chairman. At the outset, allow me also to thank you for the sincere words of recognition with which …
S27
What is it about AI that we need to regulate? — Cross-Border Content Moderation: Regional Cooperation and Coordination MechanismsThe discussions across multiple IGF 202…
S28
M e t e o r o l o g i c a l O r g a n i z a t i o n — Consumption of energy varies directly with changes in weather. Electricity facilities are subject to damage and s…
S29
HETEROGENEOUS COMPUTE FOR DEMOCRATIZING ACCESS TO AI — Energy management is crucial as energy resources are finite, with strong environmental implications There is unanimous …
S30
Keynote-Surya Ganguli — For example, it directly uses Maxwell’s equations of electromagnetism to do addition, instead of using complex energy -h…
S31
High-Level Session 2: Transforming Health: Integrating Innovation and Digital Solutions for Global Well-being — Emma Theofelus emphasised the need to understand different regional contexts and needs when developing digital identity …
S32
https://dig.watch/event/india-ai-impact-summit-2026/how-ai-is-transforming-indias-workforce-for-global-competitivene — flows, how operational controls shape risk over time and when to intervene. Then I think we have to make governance inte…
S33
WS #462 Bridging the Compute Divide a Global Alliance for AI — The speakers demonstrated remarkably high consensus on the need for multi-stakeholder collaboration, the self-perpetuati…
S34
What policy levers can bridge the AI divide? — ## Forward-Looking Perspectives ## Infrastructure as Foundation ## Key Challenges and Opportunities **Additional spea…
S35
A bottom-up approach: IG processes and multistakeholderism | IGF 2023 Open Forum #23 — Although the principle of multi-stakeholder engagement has been widely adopted in the UN and other institutions, there i…
S36
Collaborative AI Network – Strengthening Skills Research and Innovation — “We’re talking of AI being a possible DPI, a digital public infrastructure.”[1]. “I think those are aspects which a DPI …
S37
Digital Democracy Leveraging the Bhashini Stack in the Parliamen — For example, supporting languages that are not commercially viable as such. Institutionalizing governance. Governance fr…
S38
Digital Governance 3.0 — In ensuring accurate reflection of the main text and adherence to UK spelling and grammar, no discrepancies were found. …
S39
5th ‘Road to Bern via Geneva’ dialogue: On data and Tech4Good — Hsuplaced the fostering of local entrepreneur ecosystems as an element of central importance. The availability of fundam…
S40
Open Forum #82 Catalyzing Equitable AI Impact the Role of International Cooperation — ## From Consumers to Producers: Transforming Global South Participation ### Financing Innovation and Risk Distribution …
S41
WS #305 Financing Self Sustaining Community Connectivity Solutions — ## Investment Readiness and Market Analysis Brian Vo, Chief Investment Officer at Connect Humanity, and Nathalia Fodits…
S42
A Guide for Practitioners — Incompatible existing policies. No strategic initiative takes place in a policy vacuum. Existing policies of…
S43
NRIs MAIN SESSION: DATA GOVERNANCE — Trust in the data governance process is vital. The speakers highlight the importance of using data for the benefit of ev…
S44
The Challenges of Data Governance in a Multilateral World — An advocate in the discussion strongly supports data governance models that prioritize cooperation, privacy, and the com…
S45
Decoding the UN CSTD Working Group on Data Governance – draft — Political context: Stated that politics lurks in the background of the work, leading to divergent views on the meaning an…
S46
How to construct a global governance architecture for digital trade — Current governance arrangements that underpin data flows are incoherent and fragmented, reflecting conflicting private i…
S47
WS #208 Democratising Access to AI with Open Source LLMs — Daniele Turra: Yeah, I’ll try to be very brief. So one key difference that we can see in open LLMs when it comes to t…
S48
Sovereign AI for India – Building Indigenous Capabilities for National and Global Impact — -Infrastructure and Compute Requirements for Sovereign AI: The panel extensively discussed India’s need for massive GPU …
S49
India's Roadmap to an AGI-Enabled Future — And the key observation is that these environments, you know, it can scale with humans and CPUs and not necessarily GPUs…
S50
Open Forum #82 Catalyzing Equitable AI Impact the Role of International Cooperation — Agarwal explained that while India has strong talent and skills, they faced challenges with compute infrastructure and d…
S51
Partnering on American AI Exports Powering the Future India AI Impact Summit 2026 — This comment demonstrates sophisticated understanding that ‘AI sovereignty’ isn’t a monolithic concept but represents di…
S52
Global AI Policy Framework: International Cooperation and Historical Perspectives — This powerful framing served as a compelling conclusion that tied together many threads from the discussion – sovereignt…
S53
Agents of Change AI for Government Services & Climate Resilience — Srinivas Tallapragada introduced an important distinction between strategic sovereignty and technical sovereignty that p…
S54
Open Forum #26 High-level review of AI governance from Inter-governmental P — These key comments shaped the discussion by broadening its scope from purely technical considerations to encompass ethic…
S55
Building Public Interest AI Catalytic Funding for Equitable Compute Access — So how we can consider capability diffusion focusing on joint research, shared standards, open platforms and mutual lear…
S56
Press Conference: Closing the AI Access Gap — Data strategies are another critical aspect in the AI era. Countries need robust data strategies that include sharing fr…
S57
Open Forum #27 Make Your AI Greener a Workshop on Sustainable AI Solutions — Adham Abouzied emphasized the need for comprehensive governance structures that encourage data and intellectual property…
S58
Open Forum #21 Leveraging Citizen Data for Inclusive Digital Governance — Data governance | Privacy and data protection Participatory design principles, importance of citizen involvement in how…
S59
HIGH LEVEL LEADERS SESSION I — Capacity building for policy oversight and management of partnerships is considered crucial. Government institutions nee…
S60
How can sandboxes spur responsible data-sharing across borders? (Datasphere Initiative) — Promoting policies that enable responsible and interoperable cross-border data transfers, access, and sharing is of para…
S61
Dare to Share: Rebuilding Trust Through Data Stewardship | IGF 2023 Town Hall #91 — Enforcement capacity plays a crucial role in supporting data sharing mechanisms, frameworks, and policies. In the US, th…
S62
Connecting open code with policymakers to development | IGF 2023 WS #500 — Access to credible data from government sources and other reliable sources is essential, but often limited. Efficient po…
S63
Inclusive AI For A Better World, Through Cross-Cultural And Multi-Generational Dialogue — Demands on policy exist without the building blocks to support its implementation Lack of infrastructure, skills, compu…
S64
The Foundation of AI Democratizing Compute Data Infrastructure — The emphasis on community participation, data sovereignty, and alternative technical architectures suggests AI developme…
S65
How AI Is Transforming India's Workforce for Global Competitiveness — flows, how operational controls shape risk over time and when to intervene. Then I think we have to make governance inte…
S66
IN CONVERSATION WITH MICHELE JAWANDO — In summary, Isabelle Kumar appreciates the Omidyar Network’s commitment to building more inclusive and equitable societi…
S67
WS #462 Bridging the Compute Divide a Global Alliance for AI — The speakers demonstrated remarkably high consensus on the need for multi-stakeholder collaboration, the self-perpetuati…
S68
What policy levers can bridge the AI divide? — ## Infrastructure as Foundation ## Key Challenges and Opportunities **Additional speakers:** Lacina Kone: Before talk…
S69
HETEROGENEOUS COMPUTE FOR DEMOCRATIZING ACCESS TO AI — This disagreement is unexpected because both speakers are addressing AI security concerns, but they have fundamentally d…
S70
AI in Africa: Beyond the algorithm — Kate Kallot: We are living through a time where entire regions are at risk of being left out of the future. And that’s n…
S71
Building Public Interest AI Catalytic Funding for Equitable Compute Access — I recognize that what I’ve just shared with you is maybe not where the dominant private sector conversation is. And to t…
S72
Collaborative AI Network – Strengthening Skills Research and Innovation — “We’re talking of AI being a possible DPI, a digital public infrastructure.”[1]. “I think those are aspects which a DPI …
S73
Creating digital public infrastructure that empowers people | IGF 2023 Open Forum #168 — Countries around the world have made investments into digital public infrastructure (DPI) that supports vital society-wi…
S74
Press Conference: Closing the AI Access Gap — The governance, alongside the talent, the compute, the infrastructure, is an enabler of responsible innovation
S75
Digital Governance 3.0 — Dr. Bruno Lanvin: Thank you, Danil. Hello and good morning, everybody. So my name is Bruno Lanvin, and I’m a French econo…
S76
AI for Social Good Using Technology to Create Real-World Impact — And I think that’s what we’re doing. And to give you another example of how it reduces the complexity, there’s a very in…
S77
Building Scalable AI Through Global South Partnerships — And this particular event gave us that opportunity. I think we were very clear that what we wanted to do was to let peop…
S78
WS #305 Financing Self Sustaining Community Connectivity Solutions — ## Investment Readiness and Market Analysis Brian Vo, Chief Investment Officer at Connect Humanity, and Nathalia Fodits…
S79
Building Public Interest AI Catalytic Funding for Equitable Compute Access — And that’s where the investment readiness comes in. So we’re talking to countries, and we’ve had this conversation with …
S80
Overzicht acties — Even when there is sufficient competition, private investment is in some cases difficult to bring about. Apart from geo…
S81
AI Innovation in India — The tone was consistently celebratory, inspirational, and optimistic throughout the discussion. Speakers expressed pride…
S82
The Global Power Shift India’s Rise in AI & Semiconductors — The discussion maintained an optimistic and forward-looking tone throughout, with speakers expressing confidence in Indi…
S83
Driving India's AI Future Growth Innovation and Impact — The discussion maintained an optimistic and forward-looking tone throughout, characterized by enthusiasm for India’s AI …
S84
Using AI to tackle our planet’s most urgent problems — The tone is passionate and advocacy-driven throughout, with the speaker maintaining an urgent, morally-charged perspecti…
S85
Partnering on American AI Exports Powering the Future India AI Impact Summit 2026 — The tone is consistently optimistic, collaborative, and forward-looking throughout the discussion. Speakers emphasize “l…
S86
Science as a Growth Engine: Navigating the Funding and Translation Challenge — The discussion maintained a consistently thoughtful and collaborative tone throughout. While panelists acknowledged seri…
S87
WS #208 Democratising Access to AI with Open Source LLMs — The conversation also covered the risks associated with open-sourcing, such as potential misuse and reduced incentives f…
S88
WS #83 the Relevance of Dpgs for Advancing Regional DPI Approaches — Interactive polls revealed participant priorities and concerns. When asked about top challenges, responses were evenly s…
S89
Main Session on Future of Digital Governance | IGF 2023 — Ambition gap, coordination gap, and a resource gap exist By including various voices, multi-stakeholder internet govern…
S90
Social Innovation in Action / DAVOS 2025 — This comment synthesizes the discussion by proposing a concrete solution to facilitate collaboration between different s…
S91
WS #45 Fostering EthicsByDesign w DataGovernance & Multistakeholder — The tone of the discussion was largely constructive and solution-oriented. Panelists acknowledged the complexities and c…
S92
Advancing Scientific AI with Safety Ethics and Responsibility — The discussion maintained a collaborative and constructive tone throughout, characterized by technical expertise and pol…
S93
WS #453 Leveraging Tech Science Diplomacy for Digital Cooperation — Muñoz emphasized that “science diplomacy doesn’t remain confined to policy papers. It creates concrete tools, infrastruc…
S94
Skilling and Education in AI — The tone was cautiously optimistic throughout. Speakers acknowledged both the tremendous opportunities AI presents for I…
S95
AI Governance Dialogue: Steering the future of AI — The tone is inspirational and urgent, maintaining an optimistic yet realistic perspective throughout. The speaker uses m…
S96
AI for equality: Bridging the innovation gap — The conversation maintained a consistently optimistic yet realistic tone throughout. Both speakers demonstrated enthusia…
S97
OPENING STATEMENTS FROM STAKEHOLDERS — Discussions on artificial intelligence show that technological development is not without risk.
S98
9821st meeting — Ecuador: Mr. President, I thank the United States for convening this important meeting. I also thank the Secretary Genera…
S99
(Interactive Dialogue 3) Summit of the Future – General Assembly, 79th session — Abdullah Alswaha: Excellencies, ladies and gentlemen, may the peace and blessings of God be upon you. Undoubtedly, the…
S100
From India to the Global South_ Advancing Social Impact with AI — And I think with the current government’s focus on multiple domains like logistics, maybe marine, aeronautics, aviation,…
S101
WS #226 Strengthening Multistakeholder Participation — The discussion maintained a collaborative and constructive tone throughout, with participants openly acknowledging chall…
S102
Dynamic Coalition Collaborative Session — Avri Doria: that are important in the process of enabling multi-stakeholder? Certainly. I’m always willing to talk about…
S103
AI 2.0 The Future of Learning in India — Now, we have just launched going to release one more report, usage of AI in school education. In next month, we are goin…
S104
Democratizing AI: Open foundations and shared resources for global impact — Development | Sociocultural Development | Sociocultural | Human rights Educational Initiatives and Capacity Building …
S105
OPEN MIC – Taking Stock | IGF 2023 — Participants are invited to give feedback on the meeting
S106
Presentation of outcomes to the plenary — Finally, participants were encouraged to contribute their perspectives through a feedback survey distributed via email, …
S107
Taking Stock — Audience: Yes, thank you Chengetai. My name is Wouter Natus, I represent the Dynamic Coalition on Internet Standards, Se…
S108
Diplomatic Reporting in the Internet Era — Experienced diplomat Liz Galvez guided participants through the critical skills required for both traditional and Intern…
S109
Scaling Enterprise-Grade Responsible AI Across the Global South — I think it has been a fantastic week here in Delhi participating in the AI Impact Summit. And I’ll just go back to the t…
S110
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Amb Thomas Schneider — Thomas Schneider delivered a keynote address at the AI Impact Summit in Delhi, announcing Switzerland’s role as host of …
S111
AI Impact Summit 2026: Global Ministerial Discussions on Inclusive AI Development — The AI Impact Summit held in New Delhi brought together ministers and senior officials from multiple countries for discu…
S112
WSIS+20 Open Consultation session with Co-Facilitators — Ambassador Lokaale reaffirmed that human rights enjoyed offline must be protected online as well, but acknowledged that …
S113
Press Briefing by HMIT Ashwani Vaishnav on AI Impact Summit 2026 | Day 5 — Congratulations on the declaration, sir. I just wanted to know, could you give us names of some of the countries that ha…
S114
WSIS 2018 – Moderated high-level policy session 7 — Some of the local digital divide concerns were pinpointed by Mr Grigore Varanita (Director, National Regulatory Agency for…
S115
Closing Session  — Following the adoption of the Abuja Declaration in February 2025, which affirmed principles and priorities for submarine…
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
Deepali Khanna
2 arguments · 148 words per minute · 649 words · 262 seconds
Argument 1
Compute Divide – Deepali Khanna
EXPLANATION
Deepali explains that the digital divide is evolving into a compute divide, where AI progress is limited by access to GPUs, cloud capacity, and scalable compute, and that this gap will decide who leads future AI breakthroughs.
EVIDENCE
She states that AI today is constrained by infrastructure — by who has access to GPUs, cloud capacity, and scalable compute — and highlights India’s mobilization of more than 38,000 GPUs as a public-interest compute ecosystem, illustrating both the scale of the problem and a concrete response [4-7][14].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
India’s mobilization of over 38,000 GPUs as a public-interest compute ecosystem illustrates the emerging compute divide and national responses to it [S14].
MAJOR DISCUSSION POINT
Compute access as a determinant of AI leadership
AGREED WITH
Dr. Saurabh Garg, Shaun Seow
Argument 2
Philanthropy Catalysis – Deepali Khanna
EXPLANATION
Deepali argues that philanthropy should act as a catalyst to reduce risk, unlock capital, and convene unlikely partnerships that accelerate equitable AI progress.
EVIDENCE
She says, “Philanthropy’s role is to be catalytic, to reduce risk, unlock capital, and convene unlikely partnerships that accelerate progress” [23-24].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Philanthropic organisations are urged to act as catalytic intermediaries that reduce risk, unlock capital and convene unlikely partnerships for public-interest AI, as outlined in the catalytic funding framework [S1] and calls for philanthropic capital to accelerate impact [S20].
MAJOR DISCUSSION POINT
Catalytic role of philanthropy in AI democratization
AGREED WITH
Andrew Sweet, Dr. Saurabh Garg
Dr. Saurabh Garg
3 arguments · 126 words per minute · 1172 words · 555 seconds
Argument 1
Compute Barrier – Dr. Saurabh Garg
EXPLANATION
Dr. Garg identifies compute as the defining barrier for AI ecosystems, noting that limited access to GPUs, accelerators, and high‑performance clusters hampers innovation and must become affordable, reliable, and distributable.
EVIDENCE
He describes compute as “today’s defining barrier” and stresses the need for shared, affordable, reliable infrastructure across geographies, linking innovators to compute resources and trustworthy AI services [69-71].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Compute scarcity is identified as the defining barrier, echoed by India’s large-scale GPU deployment and Garg’s emphasis on affordable, reliable infrastructure across geographies [S14][S4].
MAJOR DISCUSSION POINT
Compute as the primary obstacle to AI development
AGREED WITH
Deepali Khanna, Shaun Seow
Argument 2
Prioritization Model – Dr. Saurabh Garg
EXPLANATION
He proposes an intelligent prioritization model rather than rationing, suggesting that a digital public good platform can allocate compute to public‑interest projects, with philanthropy playing a key supportive role.
EVIDENCE
He states that the focus is on “intelligent prioritization” rather than rationing, highlights the role of philanthropic organizations in ensuring affordable compute for all, and mentions collaborative models to achieve this [109-112].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The need for intelligent allocation of compute without rationing is supported by discussions on combining technological and policy solutions to avoid new dependencies [S5].
MAJOR DISCUSSION POINT
Intelligent prioritization over rationing
AGREED WITH
Sushant Kumar, Vilas Dhar, Dr. Shikha Gitao
Argument 3
Model Efficiency – Dr. Saurabh Garg
EXPLANATION
Dr. Garg suggests that future AI progress may rely less on massive compute and more on smaller, domain‑specific models, urging a shift of focus toward model efficiency to democratize AI.
EVIDENCE
He references Vishal Sikka’s remark comparing gigawatt-scale compute to human caloric energy and argues that focusing on models could solve many democratization challenges [84-89].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Emphasis on domain-specific, low-power models aligns with Garg’s call for capability development and reduced energy demand, as highlighted in the AI democratizing infrastructure briefing [S4][S30].
MAJOR DISCUSSION POINT
Shift from compute‑heavy to model‑efficient AI
Andrew Sweet
2 arguments · 108 words per minute · 1001 words · 551 seconds
Argument 1
Governance Framework – Andrew Sweet
EXPLANATION
Andrew frames the need for governance frameworks that move nations from AI consumers to co‑creators and that enable data sharing for training while protecting privacy.
EVIDENCE
He asks how to move nations from being consumers to genuine co-creators and how to unlock data sets for training without compromising privacy [113-117].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Governance is deemed essential for moving nations from AI consumers to co-creators and for privacy-preserving data sharing, reflected in multiple governance-focused sources [S21][S22][S23][S24][S18][S19].
MAJOR DISCUSSION POINT
Governance to enable co‑creation and privacy‑safe data sharing
AGREED WITH
Martin Tisné, Dr. Shikha Gitao, Dr. Saurabh Garg
Argument 2
Philanthropic Capacity Building – Andrew Sweet
EXPLANATION
Andrew highlights philanthropy’s potential to aggregate demand, negotiate better cloud pricing, and subsidize compute costs, thereby building capacity for impact‑oriented AI projects.
EVIDENCE
He suggests aggregating demand to negotiate cheaper pricing with cloud providers and notes that philanthropy could subsidize compute costs to make AI more accessible for startups and impact organisations [331-334].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Philanthropy’s role in aggregating demand, negotiating cloud pricing and subsidising compute matches recommendations for catalytic funding and multi-stakeholder collaboration [S1][S20].
MAJOR DISCUSSION POINT
Philanthropy as a lever for scaling compute access
AGREED WITH
Deepali Khanna, Dr. Saurabh Garg
Shaun Seow
2 arguments · 159 words per minute · 576 words · 217 seconds
Argument 1
Cross‑Border Compute Limits – Shaun Seow
EXPLANATION
Shaun explains that physical distance, latency, and data‑residency regulations make direct sharing of compute resources between countries such as Indonesia and India technically infeasible.
EVIDENCE
He notes latency of 50-100 ms over 10,000 km and data-residency requirements that prevent compute sharing, concluding that such cross-border sharing “doesn’t work” [324-328].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Technical latency and data-residency regulations that hinder cross-border compute sharing are mirrored in calls for regional cooperation and interoperable frameworks [S27][S23].
MAJOR DISCUSSION POINT
Technical and regulatory limits to cross‑border compute sharing
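The 50–100 ms figure Shaun cites is consistent with simple propagation physics. A minimal sketch, assuming signals travel through optical fibre at roughly two-thirds of vacuum light speed and ignoring routing, switching and queuing overhead (real paths are longer and slower, so these numbers are a lower bound):

```python
# Back-of-envelope check of the 50-100 ms latency figure for a ~10,000 km
# fibre path. Constants are standard physics; the 2/3 factor is the typical
# refractive slowdown of light in glass fibre.

C_VACUUM_KM_S = 299_792   # speed of light in vacuum, km/s
FIBRE_FACTOR = 2 / 3      # effective speed fraction in optical fibre

def fibre_latency_ms(distance_km: float, round_trip: bool = True) -> float:
    """Pure propagation delay over a fibre path of the given length, in ms."""
    one_way_s = distance_km / (C_VACUUM_KM_S * FIBRE_FACTOR)
    return one_way_s * 1000 * (2 if round_trip else 1)

if __name__ == "__main__":
    print(f"one-way:    {fibre_latency_ms(10_000, round_trip=False):.0f} ms")
    print(f"round trip: {fibre_latency_ms(10_000):.0f} ms")
```

The one-way delay comes out at about 50 ms and the round trip at about 100 ms, matching the quoted range even before any network overhead — which is why interactive GPU workloads resist being served from another continent.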
Argument 2
Energy Constraint – Shaun Seow
EXPLANATION
Shaun points out that energy availability is a fundamental bottleneck for AI infrastructure, though renewable sources in Asia are helping to lower costs.
EVIDENCE
He identifies energy as the “stumbling block” at the bottom level and mentions that hydro, solar and wind have driven down costs for many Asian countries [314-316].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Energy availability is identified as a fundamental bottleneck for AI infrastructure, with literature emphasizing finite energy resources and their environmental implications [S29][S28][S30].
MAJOR DISCUSSION POINT
Energy as a limiting factor for AI deployment
AGREED WITH
Deepali Khanna, Dr. Saurabh Garg
Dr. Shikha Gitao
3 arguments · 174 words per minute · 1259 words · 432 seconds
Argument 1
Readiness Beyond Hardware – Dr. Shikha Gitao
EXPLANATION
Dr. Shikha argues that compute demand must be matched with talent, power, data, and concrete use cases; otherwise hardware investments waste resources.
EVIDENCE
She describes the AI Investment Readiness Index, emphasizes the need for talent, power, data, and use cases, and warns that providing GPUs alone is insufficient, citing examples of failed compute facilities due to lack of readiness [231-284].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Effective AI investment requires talent, power, data and use-case readiness, consistent with regional readiness assessments and capacity considerations [S31].
MAJOR DISCUSSION POINT
Holistic readiness (talent, power, data) over mere hardware provision
AGREED WITH
Deepali Khanna, Dr. Saurabh Garg
Argument 2
India‑Africa Compute Collaboration – Dr. Shikha Gitao
EXPLANATION
She proposes concrete South‑South collaboration where India could allocate GPU hours to African countries based on specific development needs such as health, education, or agriculture.
EVIDENCE
She gives the example of requesting 2.5 million GPU hours and discusses dialogue with India to facilitate Burundi’s needs, stressing purpose-driven compute allocation [292-298].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
South-South compute sharing proposals echo calls for African co-creation in AI and reference India’s public-interest GPU ecosystem [S25][S14].
MAJOR DISCUSSION POINT
Purpose‑driven South‑South compute sharing
AGREED WITH
Dr. Saurabh Garg, Sushant Kumar, Vilas Dhar
Argument 3
Investment Readiness – Dr. Shikha Gitao
EXPLANATION
Dr. Shikha highlights the need for countries to assess both compute demand and their capacity (power, talent, data, use cases) before investing, using indices to guide decisions.
EVIDENCE
She presents the Compute Demand Index and AI Investment Readiness Index, showing Africa’s shortfall of GPU hours and the importance of talent, power, data, and use cases for effective investment [231-284].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Use of Compute Demand and AI Investment Readiness indices to guide compute investments aligns with broader emphasis on data-driven decision-making for capacity development [S31].
MAJOR DISCUSSION POINT
Using demand and readiness indices to guide compute investment
Sushant Kumar
1 argument · 78 words per minute · 278 words · 212 seconds
Argument 1
Report Release – Sushant Kumar
EXPLANATION
Sushant announces the release of a working version of a report on opening computational resources for AI, inviting feedback and collaboration over the coming months.
EVIDENCE
He states that the team has worked hard over the last months, that a working version of the report is being released, and that they are seeking inputs, feedback, comments, and suggestions for the next few months [42-45][46-48].
MAJOR DISCUSSION POINT
Launching a report on democratizing AI resources
AGREED WITH
Dr. Saurabh Garg, Vilas Dhar, Dr. Shikha Gitao
Vilas Dhar
2 arguments · 204 words per minute · 1556 words · 456 seconds
Argument 1
Institutional Intermediaries – Vilas Dhar
EXPLANATION
Vilas emphasizes the need for new intermediary organisations that can bridge technical, policy, and governmental layers to support public‑interest AI development.
EVIDENCE
He cites Culpa Impact as an example of a group that combines technical sophistication, policy impact, and government support to connect different elements of AI ecosystems [188-190].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Building new intermediary organisations that bridge technical, policy and governmental layers is recommended in the catalytic funding framework for public-interest AI [S1].
MAJOR DISCUSSION POINT
Role of intermediary organisations in AI ecosystems
AGREED WITH
Dr. Saurabh Garg, Sushant Kumar, Dr. Shikha Gitao
Argument 2
Participatory Institutions – Vilas Dhar
EXPLANATION
Vilas calls for new, deeply participatory institutional frameworks that move beyond elite‑driven models, fostering interdependence and shared prosperity in AI diffusion.
EVIDENCE
He argues for a new institutional framework that is participatory, critiques the focus on gigawatt compute without development priorities, and stresses building interdependent capacity rather than isolated elite projects [163-166][170-176].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Calls for participatory, inclusive AI institutions match governance discussions emphasizing co-creation, stakeholder involvement and flexible sovereignty models [S25][S23][S24][S18][S19].
MAJOR DISCUSSION POINT
Building inclusive, participatory AI institutions
Martin Tisné
2 arguments · 183 words per minute · 1162 words · 379 seconds
Argument 1
Data Bottleneck – Martin Tisné
EXPLANATION
Martin highlights a critical lack of innovation in data sharing mechanisms that respect privacy, creating a bottleneck that limits AI progress despite advances in compute.
EVIDENCE
He notes that while compute innovation has surged, there has been a “complete tragedy” in data innovation, stressing the need for privacy-respecting data sharing and mentioning data trusts as a possible solution [147-152].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The lack of innovative, privacy-respecting data sharing mechanisms is highlighted, with data trusts proposed as a solution to the bottleneck [S23].
MAJOR DISCUSSION POINT
Insufficient data sharing innovation hindering AI
AGREED WITH
Dr. Shikha Gitao, Andrew Sweet
Argument 2
Sovereignty & Governance – Martin Tisné
EXPLANATION
Martin distinguishes between traditional territorial sovereignty and relational data sovereignty, advocating for a flexible governance model that balances control with agency.
EVIDENCE
He discusses Westphalian sovereignty, introduces the concept of indigenous data sovereignty as relational authority, and calls for a global, open, resilient, collaborative stack rather than a strictly controlled national stack [336-339].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Relational data sovereignty and flexible governance models that balance control with agency are advocated, reflecting discussions on data sovereignty and interoperable frameworks [S23][S24][S32].
MAJOR DISCUSSION POINT
Re‑thinking sovereignty in AI governance
AGREED WITH
Andrew Sweet, Dr. Shikha Gitao, Dr. Saurabh Garg
Agreements
Agreement Points
Compute is a critical barrier and the emerging compute divide determines AI leadership
Speakers: Deepali Khanna, Dr. Saurabh Garg, Shaun Seow
Compute Divide – Deepali Khanna · Compute Barrier – Dr. Saurabh Garg · Energy Constraint – Shaun Seow
All three speakers stress that access to GPUs, cloud capacity and reliable power is the main limiting factor for AI progress and that this compute divide will decide who shapes future AI breakthroughs [4-7][14][69-71][314-319].
POLICY CONTEXT (KNOWLEDGE BASE)
The concern mirrors observations about a global compute gap, such as India’s identified shortage of massive GPU infrastructure for AI development [S48] and the high compute demands of open-source large language models [S47]; similar gaps are noted as limiting AI leadership in developing regions [S63].
Philanthropy should act as a catalyst to reduce risk, unlock capital and accelerate equitable AI democratization
Speakers: Deepali Khanna, Andrew Sweet, Dr. Saurabh Garg
Philanthropy Catalysis – Deepali Khanna · Philanthropic Capacity Building – Andrew Sweet · Prioritization Model – Dr. Saurabh Garg
Deepali frames philanthropy as catalytic to reduce risk and convene partnerships, Andrew highlights philanthropy’s role in aggregating demand and subsidising compute, and Garg notes that philanthropic organisations can help ensure affordable compute through intelligent prioritisation [23-24][331-334][109-112].
POLICY CONTEXT (KNOWLEDGE BASE)
This aligns with calls for catalytic funding mechanisms that broaden equitable compute access for public-interest AI projects [S55] and with broader perspectives on philanthropy’s role in building inclusive and equitable technology ecosystems [S66].
Robust governance frameworks are needed to move nations from AI consumers to co‑creators and to enable privacy‑respecting data sharing
Speakers: Andrew Sweet, Martin Tisné, Dr. Shikha Gitao, Dr. Saurabh Garg
Governance Framework – Andrew Sweet · Sovereignty & Governance – Martin Tisné · Readiness Beyond Hardware – Dr. Shikha Gitao · Governance Framework – Dr. Saurabh Garg
Andrew asks how to shift from consumer to co-creator and protect privacy, Martin calls for flexible, relational sovereignty models, Shikha stresses the need for governance frameworks to link compute to use-cases, and Garg mentions governance that builds trust yet adapts to diverse contexts [113-117][336-339][274-283][72-74].
POLICY CONTEXT (KNOWLEDGE BASE)
The recommendation is consistent with multilateral data-governance proposals that stress privacy-preserving sharing mechanisms and coordinated governance architectures, as highlighted in data-governance sessions and policy briefs [S43][S44][S46][S53][S54][S57].
Data sharing is a bottleneck; privacy‑preserving mechanisms and clear use‑cases are essential
Speakers: Martin Tisné, Dr. Shikha Gitao, Andrew Sweet
Data Bottleneck – Martin Tisné · India‑Africa Compute Collaboration – Dr. Shikha Gitao
Martin highlights the lack of innovation in privacy-respecting data sharing, Andrew raises the question of unlocking data for training, and Shikha proposes concrete South-South data-driven collaborations to ensure compute serves health, education or agriculture needs [147-152][292-298][113-117].
POLICY CONTEXT (KNOWLEDGE BASE)
Multiple sources emphasize that trustworthy data sharing requires privacy safeguards, clear usage agreements, and supportive policy sandboxes to unlock data flows [S43][S44][S56][S60][S61].
South‑South partnerships and new institutional intermediaries are essential for scaling public‑interest AI
Speakers: Dr. Saurabh Garg, Sushant Kumar, Vilas Dhar, Dr. Shikha Gitao
Prioritization Model – Dr. Saurabh Garg · Report Release – Sushant Kumar · Institutional Intermediaries – Vilas Dhar · India‑Africa Compute Collaboration – Dr. Shikha Gitao
Garg describes the Maitri platform as a multi-stakeholder digital public good, Sushant announces a report inviting global-south input, Vilas calls for new intermediary organisations to bridge technical and policy layers, and Shikha outlines concrete compute-hour exchanges between India and African nations [76-79][42-45][188-190][292-298].
POLICY CONTEXT (KNOWLEDGE BASE)
The point reflects proposals for joint research, shared standards and mutual learning across regions, especially through South-South collaborations and low-cost compute offerings for researchers [S55][S64][S50].
Effective AI deployment requires holistic readiness beyond hardware, including talent, power, data and use‑cases
Speakers: Deepali Khanna, Dr. Saurabh Garg, Dr. Shikha Gitao
Readiness Beyond Hardware – Dr. Shikha Gitao · Access to data, talent, and institutional capacity – Deepali Khanna · Access to data, talent, and institutional capacity – Dr. Saurabh Garg
Deepali notes that democratization also depends on data, open-source models, talent and institutions; Garg echoes the need for data, talent and institutional capacity; Shikha stresses that without talent, power, data and clear use-cases hardware investments waste resources [19-21][20-21][274-283].
POLICY CONTEXT (KNOWLEDGE BASE)
Analyses stress that talent, data and supporting infrastructure are as critical as compute for AI scaling, and that gaps in skills and power supply hinder policy effectiveness [S49][S63][S65].
Similar Viewpoints
Both identify compute access as the defining barrier to AI progress and argue that democratization hinges on making GPUs and high‑performance clusters widely available [4-7][14][69-71].
Speakers: Deepali Khanna, Dr. Saurabh Garg
Compute Divide – Deepali Khanna | Compute Barrier – Dr. Saurabh Garg
Both stress that without effective, privacy‑respecting data sharing mechanisms, compute resources cannot achieve meaningful impact, and they call for concrete data‑driven collaborations [147-152][292-298].
Speakers: Martin Tisné, Dr. Shikha Gitao
Data Bottleneck – Martin Tisné | India‑Africa Compute Collaboration – Dr. Shikha Gitao
Both argue for new, participatory institutional models that move beyond elite‑driven, territorial sovereignty toward relational, collaborative governance of AI resources [336-339][188-190].
Speakers: Vilas Dhar, Martin Tisné
Institutional Intermediaries – Vilas Dhar | Sovereignty & Governance – Martin Tisné
Both see philanthropy as a lever to address systemic constraints—Andrew through demand aggregation and subsidies, Shaun by noting energy costs and the need for affordable compute infrastructure [331-334][314-319].
Speakers: Andrew Sweet, Shaun Seow
Philanthropic Capacity Building – Andrew Sweet | Energy Constraint – Shaun Seow
Unexpected Consensus
Both see compute over‑capacity and under‑utilisation as problematic, highlighting the need for smarter allocation rather than simply building more hardware
Speakers: Martin Tisné, Shaun Seow
Data Bottleneck – Martin Tisné | Energy Constraint – Shaun Seow
Martin worries that newly built data centres will become ‘white elephants’ with low utilisation, while Shaun argues that sharing compute across borders is technically infeasible; together they reveal an unexpected agreement that simply adding compute does not solve the problem, and that effective allocation and local readiness are required [126-128][324-328].
Recognition that compute alone is insufficient without accompanying talent, power and data, despite Deepali’s strong emphasis on massive GPU deployment
Speakers: Deepali Khanna, Shaun Seow
Democratization – Deepali Khanna | Energy Constraint – Shaun Seow
While Deepali highlights India’s large-scale GPU mobilisation as a breakthrough, Shaun points out that energy availability and other systemic factors limit the usefulness of such hardware, leading to a shared view that compute must be paired with broader ecosystem support [14][314-319].
POLICY CONTEXT (KNOWLEDGE BASE)
This view is reinforced by discussions that balanced investment across compute, human capital and data is needed for sustainable AI ecosystems, highlighting alternative technical architectures and capacity-building needs [S49][S63][S65].
Overall Assessment

There is strong consensus that compute access, governance, data sharing, philanthropy and South‑South collaboration are pivotal for democratizing AI. Speakers align on the need for intelligent prioritisation, robust multi‑stakeholder institutions and holistic readiness beyond hardware.

High consensus across most themes, indicating a solid foundation for coordinated action on public‑interest AI; the main divergences relate to technical feasibility of cross‑border compute sharing and the relative emphasis on hardware versus systemic ecosystem factors.

Differences
Different Viewpoints
Feasibility of cross‑border compute sharing
Speakers: Shaun Seow, Dr. Shikha Gitao
Cross‑Border Compute Limits — Shaun Seow | India‑Africa Compute Collaboration — Dr. Shikha Gitao
Shaun argues that physical distance, latency (50-100 ms over 10,000 km) and data-residency regulations make direct sharing of compute resources between countries such as Indonesia and India technically infeasible and therefore “doesn’t work” [324-328]. In contrast, Dr. Shikha proposes a South-South collaboration where India could allocate specific GPU-hour blocks to African nations based on concrete development use-cases, treating compute as a tradable service that can be purpose-driven [292-298].
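Shaun’s 50-100 ms figure is consistent with basic propagation physics: light in optical fibre travels at roughly 200,000 km/s (about two thirds of its vacuum speed), so 10,000 km of fibre imposes about 50 ms one way before any routing or queuing overhead. The following back-of-envelope sketch is illustrative only and is not drawn from the session; the speed constant is an assumption, not a quoted figure.

```python
# Illustrative check (not from the session): why ~10,000 km of separation
# implies tens of milliseconds of unavoidable latency for remote compute.
SPEED_OF_LIGHT_FIBER_KM_S = 200_000  # assumed: light in fibre travels ~2/3 of c

def fiber_latency_ms(distance_km: float, round_trip: bool = True) -> float:
    """Minimum propagation delay over optical fibre, ignoring routing overhead."""
    one_way_s = distance_km / SPEED_OF_LIGHT_FIBER_KM_S
    return (2 * one_way_s if round_trip else one_way_s) * 1000

print(fiber_latency_ms(10_000, round_trip=False))  # ~50 ms one-way
print(fiber_latency_ms(10_000))                    # ~100 ms round trip
```

Real-world latency is higher still, since routes are rarely straight lines and each hop adds processing delay, which is why interactive workloads are hard to serve from a distant GPU cluster even before data-residency rules are considered.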
What constitutes the primary barrier to AI democratization – compute versus data versus broader readiness
Speakers: Dr. Saurabh Garg, Martin Tisné, Dr. Shikha Gitao
Compute Barrier — Dr. Saurabh Garg | Data Bottleneck — Martin Tisné | Readiness Beyond Hardware — Dr. Shikha Gitao
Garg identifies compute as the defining barrier, stressing the need for affordable, shared GPU infrastructure and intelligent prioritization of compute access [69-71]. Martin counters that while compute has seen rapid innovation, the real bottleneck is a lack of privacy-respecting data-sharing mechanisms and open-source ecosystem funding, which limits AI progress [147-152]. Shikha adds that hardware alone is insufficient; without talent, power, data, and concrete use-cases, GPU investments waste resources, highlighting a holistic readiness perspective [231-284].
Interpretations of sovereignty and governance in AI ecosystems
Speakers: Martin Tisné, Vilas Dhar
Sovereignty & Governance — Martin Tisné | Participatory Institutions — Vilas Dhar
Martin distinguishes traditional Westphalian territorial sovereignty from relational, indigenous data sovereignty, advocating a flexible, global collaborative stack that balances control with agency [336-339]. Vilas argues for new, deeply participatory institutional frameworks that move beyond elite-driven models, emphasizing interdependence and shared prosperity rather than competition or strict territorial control [163-166][170-176]. While both address sovereignty, they differ on the conceptual focus and the institutional mechanisms required.
POLICY CONTEXT (KNOWLEDGE BASE)
This reflects ongoing discussions about strategic versus technical sovereignty, the political dimensions of data governance, and the need for inclusive global AI policy frameworks [S45][S51][S52][S53].
Unexpected Differences
Technical infeasibility of cross‑border compute versus optimism for South‑South compute sharing
Speakers: Shaun Seow, Dr. Shikha Gitao
Cross‑Border Compute Limits — Shaun Seow | India‑Africa Compute Collaboration — Dr. Shikha Gitao
Shaun’s detailed technical and regulatory constraints (latency, data residency), which led him to conclude that sharing compute across countries “doesn’t work” [324-328], were unexpected given Dr. Shikha’s confident proposal that India can allocate GPU hours to African nations based on specific development needs, treating compute as a tradable service [292-298]. The contrast between a technical-first dismissal and a policy-driven collaborative model was not anticipated.
Emphasis on data governance versus compute‑centric solutions
Speakers: Martin Tisné, Dr. Saurabh Garg
Data Bottleneck — Martin Tisné | Compute Barrier — Dr. Saurabh Garg
Martin stresses that the lack of innovative, privacy-respecting data-sharing mechanisms is the greatest obstacle to AI progress, despite compute advances [147-152]. Garg, however, positions compute access as the primary barrier that must be made affordable and reliable before other components can be addressed [69-71]. The shift from a data-first to a compute-first framing was not anticipated given the broader consensus on the importance of both elements.
POLICY CONTEXT (KNOWLEDGE BASE)
Contrasting viewpoints are documented: some actors prioritize privacy-preserving data-governance structures and cross-border data sharing frameworks [S43][S44][S46], while others argue that scaling compute resources is the primary lever for AI democratization [S47][S48].
Overall Assessment

The panel broadly agrees on the need to democratize AI resources and to involve philanthropy and new institutions, but diverges on where the primary bottleneck lies (compute vs data vs broader readiness), on the feasibility of cross‑border compute sharing, and on the conceptualisation of sovereignty and governance. These disagreements reflect differing priorities—technical feasibility, policy design, and institutional architecture—rather than outright conflict.

Moderate disagreement: while participants share common goals, they propose contrasting pathways (e.g., compute‑centric infrastructure versus data‑centric governance, technical infeasibility versus South‑South collaboration). The implications are that any collective action will need to reconcile these perspectives, likely through hybrid approaches that address compute, data, talent, and governance together.

Partial Agreements
All agree that democratizing AI resources requires active involvement of philanthropy and new institutional mechanisms to accelerate progress. Deepali frames philanthropy as a catalytic risk‑reducer and convenor [23-24]; Garg proposes an intelligent prioritization platform supported by philanthropic actors [109-112]; Andrew highlights philanthropy’s role in aggregating demand, negotiating cloud pricing and subsidising compute [331-334]; Vilas stresses the need for intermediary organisations that bridge technical, policy and governmental layers [188-190]. The divergence lies in the specific levers each proposes – catalytic convening, prioritization platforms, demand aggregation, or dedicated intermediaries.
Speakers: Deepali Khanna, Dr. Saurabh Garg, Andrew Sweet, Vilas Dhar
Philanthropy Catalysis — Deepali Khanna | Prioritization Model — Dr. Saurabh Garg | Philanthropic Capacity Building — Andrew Sweet | Institutional Intermediaries — Vilas Dhar
All concur that the current AI landscape is limited by resource gaps and that addressing these gaps is essential for equitable AI development. Deepali highlights the emerging compute divide and India’s GPU mobilisation as a response [4-7][14]; Garg reiterates compute as the defining barrier needing affordable shared infrastructure [69-71]; Martin adds that despite compute advances, a lack of data‑sharing innovation is a critical bottleneck [147-152]. They differ on which gap is most urgent – compute (Deepali, Garg) versus data (Martin).
Speakers: Deepali Khanna, Dr. Saurabh Garg, Martin Tisné
Compute Divide — Deepali Khanna | Compute Barrier — Dr. Saurabh Garg | Data Bottleneck — Martin Tisné
Takeaways
Key takeaways
AI progress is increasingly limited by a global compute divide, not imagination.
India’s AI mission (38,000 GPUs) demonstrates a public‑interest, sovereign compute infrastructure that can be a model for the Global South.
Democratizing AI requires more than hardware: it also needs data access, open‑source models, talent development, and robust governance.
A multi‑stakeholder digital public good platform (Maitri) was proposed to enable shared, modular compute resources across countries.
South‑South partnerships (e.g., India‑Africa) are essential for scaling compute, data, and expertise without replicating North‑South dependency patterns.
Philanthropy can act as a catalyst by reducing risk, unlocking capital, and supporting institutional intermediaries that connect governments, private sector, and innovators.
Energy costs and model efficiency are emerging constraints; smaller, domain‑specific models may alleviate compute demand.
Investment readiness (power, talent, governance, use‑cases) is as critical as raw GPU capacity for effective AI deployment.
Resolutions and action items
Release of the “Opening up Computational Resources for New AI Futures” report with a call for feedback by 31 March.
Development of the Maitri platform as a voluntary, modular digital public good for shared compute, data, and governance resources.
Creation of a Compute Demand Index and an AI Investment Readiness Index to quantify needs and capacity (initiated by Dr. Shikha Gitao).
Philanthropic organizations to explore funding mechanisms for critical open‑source dependencies and to support institutional intermediaries (e.g., Kalpa Impact).
Panelists suggested convening a working group within the next 12 months to design participatory institutions that link compute provision with talent and data readiness.
Unresolved issues
Exact governance model for treating compute as a public utility – how to prioritize and possibly price access for public‑interest projects.
Mechanisms to unlock and share large, privacy‑preserving data sets across borders; scalability of data trusts or stewardship models.
Technical and regulatory challenges of cross‑border compute sharing (latency, data sovereignty, energy constraints).
Sustainable financing models for open‑source AI stacks beyond large corporate sponsorship.
Concrete pathways for South‑South agreements that move beyond token GPU donations to integrated, outcome‑driven collaborations.
Suggested compromises
Prioritization of compute for public‑interest applications rather than strict rationing or uniform pricing.
Adopt a modular, voluntary approach (Maitri) that allows countries to adopt only the components they need, respecting differing sovereignty concerns.
Balance sovereign compute infrastructure with interdependence by linking compute provision to talent, data, and use‑case development.
Aggregate demand across multiple countries to negotiate better terms with cloud providers, reducing costs while maintaining local control.
Combine hardware provision with parallel investment in power, talent, and governance to avoid “white‑elephant” data centers.
Thought Provoking Comments
The digital divide is rapidly becoming a compute divide… Democratization is not about catching up, it is about expanding who gets to lead… India is mobilising more than 38,000 GPUs as public infrastructure – a sovereign, open, public‑interest AI ecosystem built by the Global South for the Global South.
Frames the whole session around a shift from data‑centric inequality to a concrete infrastructure gap, and positions India’s GPU programme as a model of public‑interest compute that challenges the usual private‑sector, North‑centric narrative.
Sets the agenda for the panel, prompting other speakers to discuss governance, access models and the need for South‑South collaboration. It also establishes a benchmark (38,000 GPUs) that later speakers reference when debating rationing, pricing and institutional design.
Speaker: Deepali Khanna
We identified six foundational pillars – compute, capability, collaboration, connectivity, compliance and context – and we are prototyping a digital public good called MAITRI (Multi‑Stakeholder AI for Trusted and Resilient Infrastructure) that countries can adopt, customise and build upon.
Introduces a concrete, modular framework (MAITRI) that moves the conversation from abstract “democratisation” to an actionable platform, and links technical, governance and contextual dimensions together.
Triggers discussion about shared‑infrastructure models, the role of open‑source, and how philanthropy can fund such a platform. It also provides a reference point for later comments on governance, data stewardship and institutional intermediaries.
Speaker: Dr. Saurabh Garg
My worry is that we could end up with compute capacity in many countries that become ‘white‑elephant’ data centres – unused because we lack the data, the language resources and the open‑source stack to make them valuable.
Challenges the assumption that simply building hardware solves the problem; highlights the interdependence of compute, data, and open‑source ecosystems, and warns of wasted investment.
Shifts the tone from hardware‑centric optimism to a more nuanced view, prompting other panelists (e.g., Vilas, Shikha) to stress data sovereignty, investment readiness, and the need for institutional support beyond raw GPUs.
Speaker: Martin Tisné
Sovereignty is a Westphalian concept that treats ownership of silicon as a magic bullet. True AI diffusion requires active, not passive, impact – building institutions that turn compute into locally relevant outcomes rather than relying on trickle‑down economics.
Critiques the prevailing discourse on “AI sovereignty” and “diffusion,” reframing it as a call for new participatory institutions and concrete impact pathways.
Deepens the debate on governance, leading Martin to expand on relational versus territorial sovereignty and prompting the group to consider concrete institutional designs (e.g., intermediaries like Kalpa Impact).
Speaker: Vilas Dhar
We have built a Compute Demand Index and an AI Investment Readiness Index for Africa – we need 2.5 million GPU‑hours a year, but we only have 5 % of that capacity. Without talent, power and use‑cases, even donated GPUs sit idle.
Provides hard data and a measurement framework that moves the conversation from rhetoric to quantifiable gaps, exposing the paradox of demand versus readiness.
Leads to a concrete discussion about how South‑South partnerships can be structured around measurable needs, influencing Shaun’s point about aggregating demand and Vilas’s call for institutional intermediaries.
Speaker: Dr. Shikha Gitao
Compute is actually overrated – the real bottleneck is energy and latency. Sharing compute across 10,000 km (e.g., India‑Indonesia) isn’t feasible; we should instead aggregate demand to negotiate better cloud pricing and focus on the application layer.
Challenges the central premise that compute sharing is the primary solution, introducing practical constraints (energy, latency) and suggesting alternative leverage points (demand aggregation, application focus).
Broadens the scope of the discussion to include operational realities, prompting Martin to revisit the notion of sovereignty and encouraging the panel to think about ecosystem‑wide solutions rather than just hardware provision.
Speaker: Shaun Seow
When we think about sovereignty we should move from a rigid, territorial model to a relational one – indigenous data sovereignty shows that authority can be relational, not just about control of physical infrastructure.
Introduces a sophisticated conceptual shift that links data governance, cultural rights, and AI infrastructure, expanding the conversation beyond nation‑state control.
Encourages participants to consider more inclusive governance models, influencing Vilas’s later remarks on interdependence and the need for global collaborative stacks.
Speaker: Martin Tisné
Overall Assessment

The discussion was driven forward by a series of pivot points that moved the conversation from a high‑level narrative about compute scarcity to concrete, multidimensional solutions. Deepali’s framing of a ‘compute divide’ and India’s GPU programme set the stage, but it was Dr. Garg’s MAITRI platform and the six‑pillar framework that gave the panel a tangible reference. Martin’s warning about ‘white‑elephant’ data centres and Vilas’s critique of simplistic sovereignty reframed the problem as one of data, open‑source ecosystems, and institutional design. Dr. Gitao’s demand and readiness indices grounded the debate in measurable gaps, while Shaun’s practical take on energy, latency and demand aggregation reminded the group of operational limits. Together, these comments redirected the dialogue from hardware‑only solutions to a holistic view that includes governance, talent, data stewardship, and new intermediary institutions, shaping a richer, action‑oriented conversation.

Follow-up Questions
Will future AI models continue to require massive compute, or will there be a shift toward smaller, domain‑specific niche models?
Understanding model size trends is crucial for forecasting compute demand and shaping democratization strategies.
Speaker: Dr. Saurabh Garg
How can we facilitate accessible and affordable computing resources by improving utilization rates, reducing transaction costs, and lowering barriers regardless of geography?
Improving access to compute is central to reducing the compute divide and enabling inclusive AI development.
Speaker: Dr. Saurabh Garg
How do we move nations from being consumers of AI to genuine co‑creators?
Shifting from consumption to creation builds local agency, aligns AI with national priorities, and prevents dependency.
Speaker: Martin Tisné
How can we unlock data sets for AI training without compromising privacy?
Addressing the data bottleneck while protecting privacy is essential for building trustworthy, high‑quality AI models.
Speaker: Martin Tisné
How can the open‑source AI ecosystem, especially critical low‑tier dependencies, be sustainably funded?
Sustainable funding ensures the health of foundational open‑source components that underpin democratized AI tools.
Speaker: Martin Tisné
How can data stewardship mechanisms (e.g., data trusts) be scaled to meet global needs?
Scalable data governance structures are needed to enable responsible data sharing across borders and sectors.
Speaker: Martin Tisné
Is there an IPL‑style playbook for building public‑interest compute institutions, or is the window closing due to commercial consolidation?
Identifying a replicable institutional model would accelerate the creation of public‑interest compute infrastructure before market forces dominate.
Speaker: Vilas Dhar
What institutions need to be built in the next 12 months to connect compute, data, talent, governance and support transformation at scale?
Specifying short‑term institutional building blocks provides a concrete roadmap for coordinated action.
Speaker: Vilas Dhar
How can reciprocal agreements between India and African countries be formalized to ensure compute infrastructure is exchanged for data access, and what would a true South‑South partnership look like?
Defining equitable South‑South partnership mechanisms ensures mutual benefit and avoids replicating North‑South power dynamics.
Speaker: Dr. Shikha Gitao
How many GPU hours can India realistically provide to African partners, and how should demand be quantified?
Quantifying compute demand and supply is necessary for planning resource sharing and investment readiness.
Speaker: Dr. Shikha Gitao
How can philanthropic networks in Asia coordinate shared compute and infrastructure resources across countries like India and Indonesia, and what mechanisms would unlock such collaboration?
Regional coordination could pool resources, reduce duplication, and increase impact across Asian economies.
Speaker: Shaun Seow
How can demand aggregation be used to negotiate better pricing with cloud providers and subsidize compute costs for impact organizations?
Aggregated demand can improve bargaining power, making compute more affordable for NGOs and startups.
Speaker: Shaun Seow
How can the skills gap in Asia be addressed to maximize AI impact?
Building talent pipelines is as important as hardware for effective AI deployment in the region.
Speaker: Shaun Seow
How can relational concepts of data sovereignty (e.g., indigenous data sovereignty) be integrated into global AI governance frameworks?
Incorporating relational sovereignty expands governance beyond territorial control, respecting community authority over data.
Speaker: Martin Tisné
What frameworks are needed to ensure public‑interest AI beyond compute, covering models, talent, data, and interoperability?
A holistic framework is required to align all components of AI ecosystems with public‑interest goals.
Speaker: Dr. Saurabh Garg

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

AI for Social Empowerment: Driving Change and Inclusion


Session at a glance: Summary, keypoints, and speakers overview

Summary

The panel examined how artificial intelligence is reshaping labour markets and whether societies can afford to wait for clearer evidence before acting [1-2][14-16]. Sabina argued that firms publicly deny AI-driven job disruptions but privately acknowledge 30-40 % productivity gains that translate into workforce cuts [5-8]. She added that AI intensifies inequality and concentrates capital, citing the massive market cap of firms like NVIDIA while labour’s share of income shrinks [10-12]. Anurag questioned the source of AI investment returns, suggesting they will come either from productivity-driven labour reductions or from new products and services [32-38]. Sandhya responded that AI is prompting a redesign of roles, with coding a minor component; junior developers are becoming “AI managers” who oversee design, architecture and security rather than being eliminated [74-88]. She noted that sectors such as marketing, finance and healthcare still require human strategic oversight, and AI can boost efficiency and decision-making in these areas [93-106]. Julie emphasized that effective governance depends on strong institutions, labour research and human-centred co-creation, pointing to AI4D’s work collecting household and firm data to track AI’s real-world impacts [129-138]. She introduced the Global Index on Responsible AI, a rights-based dataset covering 138 countries that helps policymakers assess labour-related risks and design evidence-based regulations [233-242]. Sabina warned that layoffs are already occurring, that focusing solely on job counts ignores quality issues and gig-economy algorithmic management, and that broader precarity is rising [152-166]. She called for urgent policy measures (competition, antitrust, tax, labour law, social protection and skill development), especially in India where formal employment is under 10 % of the workforce [197-205][220-222].
Anurag disclosed a conflict of interest, noting his foundation’s 70 % stake in Wipro, which highlights the tension between tech growth and protecting vulnerable populations [254-262]. Sandhya argued that waiting is not an option; proactive policies, platform regulation and workforce retraining are needed, though panic must be avoided [355-363][366-367]. The discussion concluded that AI poses risks comparable to nuclear technology, yet roles requiring human wisdom, empathy and care are likely to persist, making coordinated, human-centred governance essential [406-409].


Keypoints

Major discussion points


AI will generate large productivity gains that are likely to translate into significant workforce reductions, and the scale of these impacts is already visible.


Sabina notes that companies privately admit “30 % to 40 % time-saving… which then translates into significant workforce cuts” [8-9] and points to “plenty of empirical evidence” of AI-driven surveillance and inequality [10-12]. She later stresses that “companies are laying off thousands of workers already” [152-154] and that efficiency gains “always lead to layoffs” [148-149].


The technology sector argues that AI will reshape rather than eliminate many jobs, creating new roles that focus on oversight, creativity, and human-centric skills.


Sandhya explains that coding can be handed to an AI agent, but “the success of this code… depends on a human to oversee design, architecture, security” [85-87]; junior developers become “managers of AI” [86-88]. She also highlights that in marketing, finance, healthcare, etc., AI handles routine processing while “strategic thinking… remains with humans” [94-99][104-106].


Effective governance requires strong institutions, data-driven research, and a human-centred, rights-based approach to AI.


Julie emphasizes that without “strong regulatory institutions, labor institutions, strong research ecosystems” governments cannot protect workers [130-132]. She describes the AI4D program’s work on “co-creating… with workers, communities, employers” [133-136] and the Global Index on Responsible AI that provides “country-level comparable data” on labor protection [237-242].


India (and the broader Global South) faces acute vulnerability because formal jobs are scarce and the informal sector is expanding, making AI-driven disruption especially risky.


Sabina corrects the earlier claim that only 10 % are in formal employment, stating “more than 90 % in India… are in informal employment” [197-199] and warns that “the precaritisation of the labor market… formal jobs are being gotten rid of” [208-213]. She calls for urgent action on competition policy, tax, labor law, and universal social protection [320-327].


AI is already affecting education, with concerns about cognitive decline and the need to redesign assessment and learning.


Anurag and Sabina discuss emerging “cognitive decline” among youth [313-316] and the shift back to “paper-and-pencil… in-class tests” as a response to AI-driven outsourcing of thinking [384-390]. This underscores the broader societal implications beyond the labor market.


Overall purpose / goal of the discussion


The panel convened to assess how the rapid diffusion of generative AI is reshaping labour markets, to contrast divergent views from the tech industry, labour researchers, and policy experts, and to identify concrete policy, institutional, and educational actions needed to mitigate risks, protect workers, and harness AI’s benefits, especially for vulnerable economies such as India’s.


Overall tone and its evolution


– The conversation opens with a cautious-alarmist tone, highlighting unknown impacts and urgent risks (Sabina’s “the impact is still unfolding” [1-2]; “we need to act now” [16-21]).


– It shifts to a more optimistic, industry-focused tone when Sandhya describes how AI creates new roles and augments existing work (e.g., “junior developer becomes a manager of AI” [86-88]; “strategic thinking remains with humans” [94-99]).


– The tone then becomes balanced and solution-oriented, as Julie stresses the need for strong institutions, evidence-based regulation, and collaborative governance (e.g., “without strong institutions… difficult” [130-132]; “Global Index… helps policymakers” [237-242]).


– Finally, the discussion adopts an urgent, call-to-action tone, with Sabina and the others urging immediate policy reforms, social protection, and education redesign (e.g., “we don’t have the luxury to wait” [173-176]; “we must act now” [355-363]).


Overall, the dialogue moves from warning, through optimism, to a pragmatic consensus that immediate, coordinated action is essential to steer AI’s labour impact toward inclusive outcomes.


Speakers

Julie Delahanty


– Expertise: Development research, AI policy, labor market impacts of AI


– Role/Title: President, IDRC Canada (International Development Research Centre) [S1][S2]


Sandhya Ramachandran Arun


– Expertise: Technology and AI implementation, digital transformation, consulting services


– Role/Title: Chief Technology Officer, Wipro Limited [S3][S4]


Anurag Behar


– Expertise: Philanthropy, education, social impact, AI governance


– Role/Title: Chief Executive Officer, Azim Premji Foundation; Moderator/Chair of the panel [S5][S6]


Sabina Dewan


– Expertise: Labor market research, AI’s impact on jobs and social equity


– Role/Title: Researcher, Just Jobs Network (labor market expert) [S7][S8]


Additional speakers:


– None


Full session report: Comprehensive analysis and detailed insights

Opening – Sabina Dewan – The panel began with Sabina warning that the impact of artificial intelligence on employment is “still unfolding” and that societies cannot wait for clearer evidence before acting [1-2][15-21]. She cited private firm reports from India of “30 % to 40 % time-saving… which then translates into significant workforce cuts” [8-9] and linked these efficiency gains to broader harms such as AI-enabled surveillance, biased decision-making and a “grossly exacerbating inequality” [10-11]. The concentration of capital in a few tech giants – exemplified by NVIDIA’s “$5 trillion market cap” [11-12] – is shrinking the labour share of income and raising the risk of large-scale job losses. Sabina also highlighted recent big-tech lay-offs, noting that while firms cite macro-economic shocks, AI represents “a really big disruption that comes on top of all the other disruptions” [152-158].


Panel introduction – Anurag Behar – Anurag, CEO of the Azim Premji Foundation, framed the discussion around the economics of AI investment, describing the AI summit as “the 42nd kilometre of a marathon” and stressing that massive capital flowing into AI must be justified by monetisation [32-34]. He identified two possible sources of return – productivity-driven labour reduction or the creation of new products and services – and asked Sandhya to explain which direction the technology is heading and which jobs are likely to be displaced versus created [35-44].


Sandhya Ramachandran Arun’s view of the technology trajectory – Sandhya described AI as a “very huge impact… as a disruptor” and explained that firms are revisiting role design, hiring criteria (learnability, communication, adaptability) and reskilling programmes [48-51]. She illustrated the evolution of technology with a horse-carriage-to-motor-vehicle analogy, arguing that just as societies governed the transition from horse-drawn carriages to automobiles, they must now govern AI’s rapid evolution [350-352]. Wipro’s experience shows that most work remains consultative, limiting large-scale displacement, and that AI solutions have been in use internally for over a year [60-63].


Coding and IT jobs – Sandhya noted that coding is only a small slice of software engineering; while AI can generate code, “the success of this code… depends on a human to oversee design, architecture, security” [85-86]. Consequently, junior developers become “managers of AI” [86-88]. Similar patterns appear in other sectors: marketing (AI creates content, humans retain strategy) [93-96]; finance (AI processes data, humans provide wisdom) [97-98]; healthcare (AI augments clinicians) [104-106].


Julie Delahanty on governance – Julie argued that effective AI-labour governance requires “strong regulatory institutions, labour institutions, strong research ecosystems” [130-132]. She highlighted the AI4D programme’s co-creation model with workers, communities and employers [133-136] and its data-collection effort in sub-Saharan Africa that gathers household, firm-level and worker information to inform skill-development, social-protection and labour-rights policies [137-138]. She also introduced the Global Index on Responsible AI, a rights-based dataset covering 138 countries with a dedicated focus on “labour protection and the right to work” [237-242], which provides evidence for concrete labour-market interventions despite the current lack of standardised regulation [241-246].


Sabina on empirical evidence – Sabina returned to the data, noting that companies are already laying off thousands of workers [152-154] and that AI adds a layer of disruption to existing macro-economic shocks [155-158]. She warned that the gig-economy’s algorithmic management leaves workers with “no mechanism for redressal” [161-164] and noted that more than 90 % of Indian workers are in informal employment, so only about one in ten holds a formal-sector job [197-199]. She emphasized that loss of even a small share of these scarce formal jobs would have “cascading effects across the economy” [214-215] and highlighted the growing “precaritisation” of the labour market, with many classified as self-employed contractors lacking health insurance or other safety nets [208-210][212-213].


Policy recommendations (Sabina) – Sabina called for urgent, coordinated action on competition policy, antitrust, transaction taxes, wealth tax, corporate tax, labour-law reform, universal social protection and massive investment in skill systems [173-176][320-334]. She noted that only 4.1 % of workers report having formal skill identification, underscoring the need for rapid upskilling [350-351].


Anurag’s conflict-of-interest disclosure – He disclosed that the Azim Premji Foundation owns about 70 % of Wipro [255-256] and reiterated his mandate to “take care of the most vulnerable people in the country” [258-260].


Sandhya on human wisdom and platform-embedded policy – Sandhya stressed that technology cycles (investment → scaling → sailing) demand efficiency but also creativity, vision and foresight. She reiterated that regulation must be built directly into digital platforms, not only at the national level [361-364]. She also noted that a portion of Wipro’s profits funds the KMG Foundation’s welfare work [90-92].


Anurag on education – Drawing on his foundation’s role in three universities and more than 100 000 teachers [300-302], Anurag warned that AI is “attacking the very foundation of education” by encouraging both teachers and students to outsource thinking [380-382]. He cited emerging research showing increases in depression, anxiety and cognitive decline among youth, which could reduce work capacity and make them more replaceable by AI [200-202][313-316]. In response, his institutions have reverted to “paper-and-pencil, in-class tests” to preserve assessment integrity [384-390]. He likened AI’s societal reach to nuclear technology, arguing it may be even more consequential because it permeates everyday life [393-396].


Julie on the Future of Work project – Julie highlighted IDRC’s separate “Future of Work” project, which studies how work itself is being redesigned rather than merely counting job losses [341-346].


Consolidated urgency – Across the panel, Sabina, Sandhya and Julie repeatedly emphasized that “watching and waiting is certainly not an option” [355-358] and that immediate, evidence-based policy action is required at all levels [359-364][355-363].


Consensus & closing – The discussion concluded that AI will both displace and create jobs, but human-centred skills such as creativity, empathy and strategic oversight will remain essential. Coordinated, multi-level governance, encompassing competition, tax, labour-law reform, universal protection, platform-embedded regulation and robust skill-development programmes, is needed to steer AI’s benefits toward inclusive outcomes while mitigating its risks [355-363][320-334].


Session transcriptComplete transcript of the session
Sabina Dewan

say, you know, it’s yet to unfold. We don’t know what the impact is and it’s yet to unfold. I believe that that contention is actually largely untrue. And let me tell you why. When you talk to companies privately, publicly they will not own up to the potential job disruptions as a result of AI. And partly that is because many of the big companies actually are known to be formal job creators, right? And that is a very important part of their image and their contribution to economies and societies. But when you talk to them privately, in India especially, our research shows that they will own up to anywhere between 30 % to 40 % time saving, right, productivity gains, which then translates into significant workforce cuts.

We already have plenty of empirical evidence that suggests that… that AI systems are enabling surveillance, they’re influencing decisions about who gets work, when, and what entitlements people have access to. We also know that AI systems are grossly exacerbating inequality. If you just look at the market caps of some of the top technology companies, you know, NVIDIA’s $5 trillion market cap, right? So there’s a massive accumulation of capital that really, you know, capital share is growing and labor share of income is getting smaller and smaller. So I guess, you know, this discussion that talks about social empowerment, a key question in that is the question of the impact on jobs. And the question that I, you know, put out there is, so if you even buy the idea that we don’t know, that we don’t know what the impact is, what the impact is going to be.

Can we afford to just wait, right? Or do we need to take every action possible in terms of regulations, in terms of building social institutions, in terms of really working to build systems that can manage this inevitable evolution of AI, whether we like it or not. The last thing I’ll say is just, you know, yes, there have been technologies before. Yes, they’ve had their own forms of inclusion and exclusion. But at the end of the day, this is the first time where you have the very pioneers of that technology, Geoffrey Hinton, Stuart Russell, Dario Amodei, the very pioneers of the technology themselves are ringing alarm bells. And would we not be wise to heed them?

So with that, I hope, provocative context setting, I am really grateful, on behalf of the Just Jobs Network, again with support from IDRC and FCDO, to welcome our really esteemed panelists. Mr. Anurag Behar, who is the chief executive officer of the Azim Premji Foundation, has very graciously agreed to chair this conversation, moderate the discussion. We have Dr. Julie Delahanty, who is the president of IDRC Canada. Thank you, Julie. And Ms. Sandhya Ramachandran Arun, who is the chief technology officer of Wipro Limited. Thank you so much for being here, Sandhya. So, Anurag, over to you.

Anurag Behar

Thank you. Thank you, Sabina. Good evening, everybody. There’s so much investment going into AI. Why is so much investment going into AI? We are in the fifth day of the AI summit. So this is like the 42nd kilometer of a marathon. Right? At this stage, such investment has to be justified by some monetization. And where is that monetization going to come from? It’s either going to come from productivity, which comes from labor reduction, or it is going to come from new products and services, or a combination of both. That’s where it’s going to come from. Right? We will talk more about that. At this moment, my job is easy.

I’m going to just ask Sandhya, because she’s the representative of the technology world here really, that which way is this technology headed? And in very simple terms, what is she seeing its implications on jobs? I mean, what kind of jobs are going to get displaced, destroyed? And what kind of jobs are going to get created? and what’s the underlying dynamic because of which these jobs will be created and the jobs will be destroyed. So how does she see it in the world of technology? Let’s start with that.

Sandhya Ramachandran Arun

Sure, thank you so much. Thanks, Anurag, for the question. So as far as the tech industry is concerned, we are really witnessing a very huge impact of the AI evolution as a disruptor. We’ve had to revisit how job roles are created. We’ve had to revisit how talent has to be reskilled. And we have also revisited the responsibility, not just in terms of security, safety, but also in terms of what does it mean to our colleagues and our hiring. I think initially there was a huge amount of fear that we would not hire from colleges, which is now dispelled, because Wipro continues to hire from colleges, and so do our competitors. But the criteria for hiring has shifted to a more nuanced, a more calibrated way of looking at learnability, looking at whether a person communicates technical ideas well, looking at whether a person is adaptable.

Because AI is a technology that is changing as we speak. So no one can claim to be an expert in AI and remain that way for the next five days, possibly, because there are things changing every day. With regard to our own talent, we have created role personas, and we have created very specific learning modules on how the role changes with AI. And everybody from the board to the CEO down to the youngest employee is going through a very calibrated learning process. And there is also a very calibrated way in which services and ways of working are changing. So to that extent, we see a change. We are not seeing a displacement because most of the work that we do is consultative in nature, in spite of the market valuation erosion that we saw some time back because of news from Anthropic and Palantir.

The insiders in the technology world were already aware of the transformative nature of these solutions coming up. And we have already been using these solutions significantly for over a year. So from a market sentiment point of view, possibly there was an erosion, but from a technology impact perspective, we have been bracing ourselves for the change and our journey of transformation continues.

Anurag Behar

I just have a follow-up on that, and then I’ll move to Julie. I’ll put it very, I mean, let’s say, a very, very simple, commonsensical question. Which is that we are hearing about these tools where coding has become so much easier, right? So, and this is not just about Wipro, it’s about the IT industry in general. So if coding is becoming so much easier, and 50 % or 70 % of coding can be done by these AI tools, then isn’t it inevitable that IT sector jobs will be lost? Or if there’s business or volume growth, much less hiring will happen. So that’s part one to my question. Part two is, if you move away from the IT world, and if you go to, let’s say, design and marketing, or, I mean, let’s say my world of the academy, the world of research, those of you who have used research assistants or work with research assistants know, so much of that job is being done easily by AI.

So part one of my question: if coding is becoming so much more efficient, isn’t it inevitable jobs will be lost, or that much less hiring will happen, whichever way? And aside from that, in the outside world, in other industries, what is it that you’re seeing?

Sandhya Ramachandran Arun

Sure. Let me just address the coding part of it. I think for over 15 years, the industry has been trying to explain to the outside world and as well as to the talent aspiring for careers with us that we do not have coding roles primarily. Coding is a very small task in what a software engineer does or a software developer does. There is the need to understand business outcomes. There’s a need to understand customer experience. There’s a need to understand architecture and what is a well -engineered code, right? So this is not new today. This has been in existence. I mean, I’ve been doing digital transformation for the last 15 years, and we’ve been trying to change how the world thinks about these roles.

Yes, the day is here when coding can be completely handed off to an AI agent. And that is indeed a fact, right? But the fact that supports the success of this code in business is really the ability to have a human oversee the design, the engineering, the architecture, the security, as well as delegating the coding work to an agent. So the role of a junior developer really becomes that of a little manager of AI, as opposed to saying, you’re displacing my job. The person’s actually going up if the person really is aware and aligns to what the organization needs in terms of figuring out what is required. And those are the trainings that are happening.

That’s what’s happening in terms of selection. We now have COEs inside engineering colleges where we are talking to universities about this as well. And what about other industries? What are you seeing? So other industries we work with, there is a variation. So if you think about it, marketing, there’s a lot of work that gets offloaded. The strategy, the planning, the oversight on execution, the ROI on marketing still remains a strategic thinking job that remains with humans. But you can generate a lot of good quality visual, audio, and video content using AI today. And probably it’s making marketing a whole lot more efficient. Now, if you take finance, for example, again, a lot of processing gets taken over by AI, but it still needs a human to bring in wisdom in terms of how the data gets interpreted, how decisions are being made, and also to make sure that the AI aligns to human values in some sense.

So those kind of changes are happening in these functions. Industry-wise, there is a lot happening that is positive, I would say, in, say, healthcare, for example, even in banking, for example, where we are able to fight financial crimes a whole lot better. In healthcare, we are augmenting technicians, clinicians, and doctors with more intelligent input for decision-making. And while AI can make the decision, you don’t allow it to make

Anurag Behar

So, Sandhya, just put a pin on something that you said, and I’ll come back in the second round. You used the words human and wisdom. So just put a pin on that, and I’m going to come back to that in my second round. Julie, if Sandhya were any less optimistic than she is, she wouldn’t be representing the tech world, you know. So one should expect that she’s as optimistic as she is. But what I wanted to ask you was that, you know, eventually, and, you know, from your vantage point, you know, you’re seeing how governments are dealing with this evolving situation, not just on AI safety and, you know, all the other things, but particularly on labor markets.

So how can governments and institutions govern AI responsibly, such that any disruption in labor markets is sort of minimized or handled well, or the transition happens well? So let’s assume this picture that Sandhya has painted, that, of course, there’s something going on with productivity, like she talked about in marketing and advertising. So some people are going to lose jobs there. So what should government institutions do? How does one govern this situation, such that the benefits are maximized? And I’m talking particularly about labor markets, not the other stuff, while harms are minimized.

Julie Delahanty

Yeah, thank you so much. I’m going to answer that question, but the last question just made me think about two things. One was, you know, I’m old enough to remember when computers first came around in the 70s and, you know, what we thought would happen with computers and the job losses that we anticipated. And, of course, we did lose jobs. There was a lot of labor disruption related to, you know, typing pools and different kinds of ways. But at the time, even home computers, nobody could even fathom what you would do with a home computer. The conversation then was that home computers would be used to develop recipes and that you’d have recipes because homes were only where homemakers were.

People couldn’t even, there’s such gendered ideas that people just could not understand what you would do with a home computer. So I think in the same way, some of what… is going to happen with AI in the labor market, we may not be able to anticipate just yet. So just as a reminder of where we came from with other important technologies. But when it comes to governance, I think the important issue is that it’s not really only about the technology, it’s really about institutions, it’s about workers, and it’s also about research. So when it comes to institutions, really without the kind of strong institutions in countries, regulatory institutions, labor institutions, strong research ecosystems that are able to really understand what’s happening in the labor market, I think it’s very difficult to end up having a strong regulation of what’s happening in the labor market.

So just those institutions are incredibly important to understanding where job losses might be, where biases might happen, and really investing in people and institutions is something that has to go hand in hand with our thinking around technologies. Another area is around making sure that when we’re thinking about new technologies, that we’re making it very human-centric. And one of the things that the AI4D program does when we think, what do we mean by human-centric? It’s really about making sure that we’re co-creating new technologies with the co-creation of workers, of communities, of employers, so that we can understand how to enhance job quality, how to enhance productivity, rather than increasing inequalities or changing who benefits.

So really understanding who benefits, who’s going to face the kinds of disruptions is really important so that we’re not thinking about that as an afterthought. That we’re really shaping AI systems using that knowledge. And similarly, I think on the importance of research, I’ll just give an example from our AI4D work: we’ve done a big research program with partners in sub-Saharan Africa that’s collecting household data, firm-level data, worker data, to understand what the real-world impacts of AI are on labor markets. And it’s that kind of tracking, who’s going to benefit, understanding who’s going to be displaced, and how the tasks and skills are really changing that’s going to allow governments to better design and think about what kind of skills development they need, what kind of social protections they need, and how to support labor rights.

So really, I think growing AI responsibly doesn’t mean avoiding innovation or avoiding change, but it’s really about shaping AI so that it, it does strengthen labor markets and supports workers and creates more opportunities.

Anurag Behar

Thanks, Julie. Thank you so much. I’ll move to Sabina. Sabina, I mean, since you are the labor market expert here amongst us, and the researcher, what is it that you see? I mean, there’s so much news. We have had these five days of this grand summit. What is really going on? What do we understand, and what don’t we understand, in the context of the impact of AI on jobs? How do you stack it up?

Sabina Dewan

So just a little tongue-in-cheek: if we went back to the 1600s and asked ChatGPT then if Galileo was correct, it would have said no way, right? So this technology, you know, for all the possibilities that it brings notwithstanding, it is not just a technology. We can’t just look at AI as machine learning, large language models. It is a system, it is an instrument that is being utilized for social, political, and economic engineering. And my job is to look at the impact of that in labor markets. So if we limit ourselves just to the question of how many jobs will be lost, how many jobs will be gained, that’s A, not even an appropriate question.

Two, I agree with my fellow panelists that we don’t necessarily know what sort of new possibilities there might be. But what we do know, what we already see, is also something that Sundar talked about, which is the efficiency gains. And any time there are efficiency gains, there are layoffs. And please, you do the research, right? Like, I do my job. But look at the newspapers. Companies are laying off thousands of workers already. All the big tech companies have in recent years been laying off workers. Now, sure, they can say that this is a confluence of many factors. It’s not just AI, and most of them will not just ascribe it to AI. They might ascribe it to macroeconomic conditions, to the confluence of various other forces like the pandemic or trade shocks, all of which is true.

But AI is one really big disruption that comes on top of all the other disruptions, and there’s already plenty of evidence that is suggesting that these disruptions are not just changing the quantity of jobs in terms of how many companies are already laying off workers. Again, I mean, we’ve heard also projections from the tech companies themselves, right, of what the possible disruptions and layoffs are going to be. But we also already have evidence of people being laid off. But then on top of that, I would say let’s look beyond just how many jobs are lost and how many jobs are gained to actually look at, I mean, take the gig economy, for example, and algorithmic management of gig workers.

That is a labor market issue. If a gig worker is wronged, the platform just, you know, they just get kicked off the platform. There’s no mechanism for redressal because it’s an algorithm that’s managing the worker. So who do you talk to? I mean, I can go on and on and on. Now, we might be separating out platforms from AI, but actually the algorithms are AI, and it’s embedded in a platform economy that is increasingly becoming the architecture for transactions, and it’s deeply troubling. And then the last thing I’ll say is, so I’ve already said… like in terms of quantity of jobs, we are already seeing evidence of layoffs, right? We’re already seeing the evidence of layoffs.

It’s just that people aren’t necessarily able to pinpoint and ascribe it to AI. That’s point number one. Two, we need to go beyond the question of quantity of jobs and also look at the impact of this technology on quality of jobs. And third, we need to really deeply think about, again, to Julie’s point, the architectures that can help mitigate some of the potential adverse effects of this technology, both on the quantity and the quality of jobs. And we don’t have the luxury to sit and wait and say, hey, let’s get the empirical evidence and then we’ll figure out what to do. That will be way too late, right? So what do we need? We need countries to think about competition policy.

We need to look. We need to look very closely at tax policy. We need to look very closely at how labor laws need to change. We need to look at social protection systems. We need to look at skill systems, everything that Julie just mentioned, right? But we have to start from an urgency about this is having a huge impact already. It is likely to be, you know, even bigger, and we don’t have the luxury of time to just sit back and wait and say, hey, we need more empirical evidence before we figure out how to mitigate the negative or potentially negative circumstances. So that is what I think is, you know, really, really urgent, that everyone get on that bandwagon and say we need to create these systems and ask for them and do it in our work and do it in our advocacy.

Anurag Behar

Yeah, thank you. I’ll just follow up with it. So, and Julie, please pardon me for saying this. I’m saying this tongue-in-cheek, and all my friends and colleagues here who are not from India, please pardon me for what I’m going to say. So, you know, we Indians, why should we care about all this? And the reason I’m saying that is because, you know, well, just about 9 or 10 % of our employment is in the formal sector. So even if there is huge disruption in labour markets, maybe 2 % of these people are going to lose their jobs, right? So why should we care about all this stuff? Do you have any comments?

Sabina Dewan

I do. You can be sure I do. You can be sure I have a comment about that. So if you look at the numbers, we are more than 90 % in India in informal employment. So Anurag’s exactly right. He knows his numbers. So, you know, essentially what you’re saying is 1 out of every 10 people stands to be potentially affected, right? That’s one way of looking at it. The other way of looking at it is we have such few good jobs, right? We have such few jobs in the formal labor market. Only one in 10 people get to have a formal sector job. And now you’re taking that away as well, right? That stands to be disrupted. So again, we’re moving to a world of work that is much more precarious, much more insecure, much more uncertain, where workers don’t, they’re not even called workers anymore.

We call them self-employed contractors. They have no health insurance. They have, you know, this is the precaritization of the labor market. So not only do you have, you know, pandemic, climate change, energy transition, trade shocks, and AI disruption, but you have a world of work that is much more precarious, disrupting everything, but you also now are moving to a place where work is becoming more and more informal. Formal jobs are being, you know, gotten rid of in the name of, pardon me, efficiency gains, right? And so, yeah, so that’s why in India we should be really scared, because we have such few formal jobs. And then imagine if you have these jobs in the IT sector in Bangalore disappearing. All the workers that used to go to bars and restaurants and get loans to buy houses and cars, that starts to disappear, and it has cascading effects across the economy.

So, you know, so the impact of this is definitely in the global south. It is definitely beyond the few formal sector jobs. And it’s deeply disturbing. And we need to actually work to understand from technologists very clearly, you know, how these efficiency gains are going to happen, and what different governments, and public architecture, can do to manage some of these changes. So we do need to care. Definitely need to care. We need to care urgently.

Anurag Behar

All right. So I’m going to come to Julie on this and come back to you, Sandhya, because I put a pin on something that you said, right? So, Julie, I mean, let’s assume that the alarm that Sabina is raising is at least half true, right? It’s more than half. You know, I have a deep conflict of interest, and I’ll tell you once I’m sort of done with this. So, Julie, how can, you know, what are the lessons that you’re seeing across countries? You’re seeing the vast landscape, right, and IDRC has a view across the continents. So what lessons can be learned from across the continents, such that AI is able to create opportunities, right, part of what Sandhya talked about, and doesn’t really deepen inequality, or minimizes it?

What are you seeing across the countries? Something, some good stuff.

Julie Delahanty

What is that? What is that regulation? And I think one of the – we have this AI – the Global Index on Responsible AI that some of you may have heard about. It’s been talked about a lot during the conference, or at least some of the sessions that I’ve been to. And really what that is, it’s the largest global rights -based data set on responsible AI. And what is distinctive about it is that it includes a dedicated focus on labor protection and the right to work. And by providing that country level, that sort of comparable data, it looks at 138 countries. So by providing that comparable data, it’s helping governments to understand what they might need to do better, what some of the issues are, how they can improve.

So really using that information to support governments in understanding what is the regulation, what is the solution that they need, not just – You know, it has to be based on some evidence. And I think the third big thing, which won’t be a surprise to anybody here that I’m saying this, is that we really need to have good evidence, and evidence really matters when it comes to these issues. So tools like the Global Index on Responsible AI really allows policymakers to move beyond kind of the abstract must-fix regulation to assess how governance of AI actually affects people’s rights, affects their jobs, affects their working conditions, and supports more proactive policymaking on labor regulations, again, skills, social protections, et cetera.

And I think equally important is that we’re still learning. There is no standardized, here is the regulation that you need codified. Through the kind of work that we’re doing, I think we’re learning what’s the balance between… supporting innovation… and still supporting regulation and safety. And I think working together across many countries to share that kind of information is what’s going to support us in finding the right tools.

Anurag Behar

Thanks, Julie. I’m going to come to you, Sandhya. But I just want to disclose something to all of you. That’s my conflict of interest. You know, Sabina is a labor market researcher, and naturally I would think she’s saying what she’s saying. Julie represents IDRC, and therefore she’s saying what she’s saying. Sandhya is the tech person here, so she’s saying what she’s saying. My problem is I’m responsible for this organization, the Azim Premji Foundation. And my problem is the following. My problem is that the foundation owns about 70 % of Wipro. Okay. So whatever is good for a tech company is good for us, right? On the other hand, my job is not to take care of the technology and this world.

My job is to take care of the most vulnerable people in the country, right? The very poorest, the most marginalized, those who have no recourse to social protection. That’s my job. So I am a deeply conflicted person, right? Very deeply conflicted person. And I wanted to disclose that because I’m going to come to that towards the end. And it has a specific bearing on the question that I’m going to ask Sandhya, which is, you said something fascinating. And I want to put a pin on that. And I’m pulling your leg, you know, which is that rarely do you hear such words from a tech person. She talked about human care and wisdom, right? Didn’t she?

Okay. So, you know, really, my takeaway from what you were saying is that the tech stuff, you know, the coding and that kind of stuff, that can get automated. But the human side – understanding people, understanding desires, how you work with people – that’s what is hard to do. And that’s something that you’re already seeing, right? So would you want to sort of comment on that?

Sandhya Ramachandran Arun

Yeah, so the stereotype that techies aren’t human is a little unfair, I think, so don’t anchor it in your heads. But yeah, where do I start? At the end of the day, what do technology consulting and technology services try to do? They try to help our client businesses become more successful. And our client businesses in turn become more successful when they are innovative, when they are creative, when they are growing, and when they are doing their business profitably. Or if they have already reached a state of maturity, they are trying to bring in a whole lot of efficiencies as well, right?

So it’s the S curve where you have an idea, you nail it, and then you kind of scale it, and then you kind of start sailing. And when you’re sailing, that’s when you become a big battleship and you have to focus on discipline and efficiency and ensure that you’re making profits just the same even while you’re running this big ship. But then the cycle doesn’t end there. It kind of keeps going. You keep coming up with new ideas, you keep scaling it, and you keep sailing it. And so profitability starts off with an investment, it grows, and then you have to become super efficient to remain profitable. And I’m saying this to my boss because every dollar that we earn funds, to the tune of about 66 cents, whatever efforts the KMG Foundation undertakes for welfare, right?

And I think it’s a beautiful model, and I don’t think an AI could have thought of it. So therefore I do believe very strongly that creativity, wisdom, vision, foresight, human centricity is core to any technology disruptor that comes about. So if you imagine the days when there were horse carriages, all the horses would have been crowding the roads, people would have been going from place to place, and at the end of the day you would have had a whole lot of methane, which would have kind of ended things a long time back because of global warming. But yes, vehicles did come, and you did have carbon fuel, and the evolution continues.

So I don’t think technology is going to stop. So human ingenuity is going to keep bringing technology disruptors. These technology disruptors are going to be more and more exponential in terms of what they can do. And it is up to humans to figure out how to create policy, how to create a governance mechanism, and how to ensure that we derive benefits, mitigate the risks, and at the same time ensure that humanity is at the center of all of this. Right? Now, this is easier said than done, but we’ve done it with nuclear energy. Despite the disasters, the fact that you and I are still alive today and thriving and living a better life than we ever lived in the last 100 years is an example that, yes, you could have accidents that are preventable, but accidents are created by humans.

And it’s up to the leadership to ensure that they put the required guardrails. It could be policy. It could be governance. It could be guidelines, whatever you call it.

Anurag Behar

Yeah, it’s good to hear that, you know. I’m just going to come to one round and then perhaps have the last word, if I may. Yeah, okay. So, Sabina, what’s your take? What should we do? What should we do, really?

Sabina Dewan

So I’ve already kind of said what we should do, but first, Sandhya, everything you said really resonated with me, right? And I fully agree that, you know, the humans have to take responsibility. I can think of a few very worrying scenarios where there are leaders in the world that have access to, you know, nuclear weapons that perhaps… shouldn’t have access to nuclear weapons, right? So how much confidence do we have in people, particularly when you look at the overall trend of growing precarity? Again, take India alone. Fifty-eight percent of our employment is now self-employment. And these are people, workers, that have no coverage of health insurance or any kind of safety net.

Add to that the fact that, like, there’s all these different forces coming that we don’t know, you know, if AI disrupts jobs or pandemics happen. We all saw what happened with migrant workers walking back to their villages, hundreds of thousands of migrant workers, right? There is a lot more precarity in the labor market than there ever has been in the past in modern history. And the problem is that regulation, and the regulation of the labor market in particular, across the globe is getting weaker and weaker in this respect. And then we don’t have precedent, as Julie said. Like, we’re still trying to figure out exactly what we should do, right?

But I will say, I mean, I’ve said many of these things, and I will say that, you know, in the meantime, AI is different because this is also the first time, research is now showing, that the current generation of young people have shown cognitive decline, right? So, I mean, rates of depression, rates of anxiety, cognitive decline. How does cognitive decline affect your ability to operate at work and then be replaced by machines that are more efficient because you’re getting stupider? Like, right? Sorry, but this is a really worrying scenario. So what should we do? I think I’ve said this multiple times: regulation and the building of social institutions. But I’ll take Julie’s challenge and say, okay, let’s go a level deeper.

I think we need to look at competition policy very closely. We need to look at antitrust. We need to look at tax, and within tax, we need to look at the full gamut, from, you know, certain kinds of transaction taxes to something like a wealth tax, to corporate tax rates, the whole gamut of tax tools that we have at our disposal. We certainly, in an area that I know well, need to look at labor regulations, right? There’s a lot of discussion now about what should happen in the gig economy, but, you know, if two people have lost their jobs, how do you distinguish between them?

You can’t say, okay, this person lost their job to AI, so we’re going to give them health care and, you know, other kinds of support, but that person, we’re not, right? You need to have universal systems of support for workers, of health care, of other forms of social security, that enable consumption smoothing as well, so the economies keep functioning. We need to invest heavily in our skill systems. For all the talk – and I can talk about Indian numbers till I’m blue in the face – for all the investment and talk about skills training in India, only 4.1 percent of respondents in our labor force survey identify as having any kind of formal skills. Only 4.1 percent, despite, you know, us saying Skill India and talking about investments in skills for well over a decade and a half.

There’s also well-documented research about how the quality of education is so poor. So how do you take a young person in a remote part of India who can barely read and write, who might say, I’ve graduated, I’ve done eighth class or tenth class, you know, even twelfth class, but can barely do foundational reading or math, right? How do you take them and say, I’m going to train you for AI? Yeah, that’s what I’m going to do. Like, it doesn’t work. It doesn’t work. So we need to actually fundamentally think about regulations. We need to very urgently work on our education and skill systems so that they meet people where they are.

We need to definitely think about universal social protection systems that enable workers to transition between occupations, from one sector to another, from one occupation to another. And I can go into much more detail, because this is something that my organization has worked a great deal on: what kind of systems we need to enable workers to be better protected.

Anurag Behar

Thanks, Sabina. We’ve got, I think, five minutes or so, so I’m going to try and wrap up. Julie, would you want to comment?

Julie Delahanty

Yeah, I just want to make a fairly random point, I think. And that is, in addition to the Artificial Intelligence for Development program that we have, we also have a Future of Work project. And I think one of the interesting things there that we don’t talk about as much – everybody is very worried about job loss, that’s kind of the big thing, it’s job loss – but actually, one of the bigger issues that’s happening is rethinking how to work and ways of working, and the disruption that’s happening within jobs, within the workplace, within institutions and organizations. That’s not necessarily about job losses. It’s about a complete shift in the way that we do our work and how workers are going to adapt to that fundamental shift in the way that they work.

So it was just a random thought.

Anurag Behar

I don’t think it’s a random thought at all. I think it’s a salient foundational thought, you know, for this discussion. You want to comment on that one line? Because that’s such an important point.

Sabina Dewan

Yeah, no, I mean, just to say that, you know, the Future Works Collective is a global consortium of researchers that IDRC funds, that JustJobs is part of, that focuses exactly on that. So I agree 100% that that is a foundational and very important issue.

Anurag Behar

Sandhya, what about you? How would you want to respond to everything Sabina has said?

Sandhya Ramachandran Arun

Look, I think… Watching and waiting is certainly not an option. I mean, we don’t want to be in a Game of Thrones situation when you’re saying winter is coming for some 22 seasons and then it comes. Nobody’s going to wait for it. So we know what’s coming, and we know what’s coming is also capable of evolving and changing tremendously. So we need to learn to change. And yes, we do need to elect good leaders. We do need to have policy at all levels. We need to have policy embedded in platforms. And of course, we need to have a lot of reimagining work and training of workforce. So yes, I think to some extent, painting doom and gloom is good.

Then we start acting, right? But to some extent, I think it also shouldn’t make you so paranoid that you become a deer in headlights. So yes, we should act, and we should move forward on all of that that all of us agree on.

Anurag Behar

It seems so. It seems so, absolutely. No, but, you know, I think that’s, in some senses, a very good summary, what you just now said, right? What I wanted to say was that this phrase that’s used, boomer and doomer, boomer and doomer. So in a sense, my head is the boomer and my heart is the doomer, given my role. I want to take you just for a minute to my job, which is more to do with education. So we run three universities. At any point in time, we are working with more than 100,000 teachers, right? And so I’m an education person. I’m not the labor market or the tech person here, right? And I am deeply concerned by the effect of AI on education, deeply, deeply concerned.

In fact, I feel that AI is attacking the very foundation of education. The very foundation of education. The phrase artificial intelligence suggests what it does: you essentially outsource your thinking. So teachers are outsourcing their thinking and students are outsourcing their thinking. And that’s what Sabina was referring to, though she was referring to it in the context of social media: that for the first time, in this round of assessments, we are seeing cognitive declines, or on test measures we are seeing declines in student performance. I cannot tell you how serious the issue is. And it’s impossible to regulate this. It’s impossible to regulate this because it’s everywhere.

So the only way we are able to deal with this, in the universities at least, is that all assessment, all examination, is now returning to the old-world paper-and-pencil, in-class test. No home assignments, no project work, nothing. Just come here, sit, and write the examination. It is truly serious. I mean, we don’t know how to tackle this right now. And the reason I talk about that is I want to go back to the analogy that Sandhya used, and I’m so glad that she did: that it is as serious as nuclear technology. And in one very deep way, it is far more serious than nuclear technology, because nuclear technology did not reach out and affect every individual human being.

There, the possibility of policies and governance being able to circumscribe, to put boundaries, to manage – those possibilities were far greater. Here, we have perhaps the most disruptive of technologies, in retail form, right? This is retail transformation of humanity. It is so hard to do this. But I’m really glad that with the three of you here, we have this sort of reasonable conclusion, if I may say so, that we are really facing something as serious as nuclear technology. And you can’t run away from it. It’s happening. You can’t run away from it. Job losses will happen. We’ve got to figure a way out of it. And I would want to close on this human note: that eventually, perhaps, those jobs that require wisdom, empathy, care, human understanding, they are going to be the hardest to replace, if at all.

And they will stay. And that’s what one can see in the tech world. So with that, I want to thank all three of you. Thank you so much. I want to thank all of you for coming here. Thank you very much. Thank you.

Related Resources: Knowledge base sources related to the discussion topics (37)
Factual Notes: Claims verified against the Diplo knowledge base (5)
Correction (high confidence)

“NVIDIA’s “$5 trillion market cap””

The knowledge base reports Nvidia’s market capitalisation at about $2.915 trillion in 2024, not $5 trillion [S102].

Confirmed (medium confidence)

“AI is “a really big disruption that comes on top of all the other disruptions””

The same phrasing appears in the source, confirming AI is described as a major additional disruption [S1].

Additional Context (medium confidence)

“The horse‑carriage/motor‑vehicle analogy for governing AI’s rapid evolution”

A comparable analogy is given in the knowledge base, which likens the transition to the historical issue of horse-drawn carriages before automobiles [S111].

Additional Context (medium confidence)

“The impact of artificial intelligence on employment is “still unfolding” and societies cannot wait for clearer evidence before acting”

The source notes that long-term employment impacts of AI remain uncertain despite current hiring stability, supporting the claim that impacts are still unfolding [S97].

Correction (low confidence)

“Private firm reports from India of “30% to 40% time-saving… which then translates into significant workforce cuts””

The cited source describes a 35% productive time **loss** and revenue leakages for an Indian firm, which contradicts the claim of 30–40% time-saving [S99].

External Sources (111)
S1
https://dig.watch/event/india-ai-impact-summit-2026/ai-for-social-empowerment_-driving-change-and-inclusion — And I think equally important is that we’re still learning. There is no standardized, here is the regulation that you ne…
S2
AI for Social Empowerment_ Driving Change and Inclusion — – Julie Delahanty- Sandhya Ramachandran Arun – Sabina Dewan- Julie Delahanty
S3
https://dig.watch/event/india-ai-impact-summit-2026/ai-for-social-empowerment_-driving-change-and-inclusion — And I think it’s a beautiful thing. It’s a beautiful thing. It’s a beautiful thing. model and I don’t think an AI could …
S4
AI for Social Empowerment_ Driving Change and Inclusion — – Anurag Behar- Sandhya Ramachandran Arun – Sabina Dewan- Sandhya Ramachandran Arun
S5
https://dig.watch/event/india-ai-impact-summit-2026/ai-for-social-empowerment_-driving-change-and-inclusion — All right. So I’m going to come to Julie on this and come back to you, Sandhya, because I put a pin on something that yo…
S6
AI for Social Empowerment_ Driving Change and Inclusion — – Anurag Behar- Sandhya Ramachandran Arun
S7
https://dig.watch/event/india-ai-impact-summit-2026/ai-for-social-empowerment_-driving-change-and-inclusion — Add to that the fact that, like, there’s all these different forces coming that we don’t know, you know, if AI disrupts …
S8
AI for Social Empowerment_ Driving Change and Inclusion — – Sabina Dewan- Sandhya Ramachandran Arun
S9
Comprehensive Report: Preventing Jobless Growth in the Age of AI — Economic | Future of work Study of LLMs in call centers showing 14% average increase in productivity, up to 35%. Studie…
S10
AI drives productivity surge in certain industries, report shows — A recent PwC (PricewaterhouseCoopers International Limited) reporthighlightsthat sectors of the global economy with high…
S11
Building Inclusive Societies with AI — “The people who are absolutely at the lower quartile, they actually need help.”[81]. “The bottom quartile is not yet plu…
S12
Discussion Report: AI Implementation and Global Accessibility — This comment shifted the conversation from discussing current disruptions to future-oriented thinking. It led Sarah to f…
S13
Upskilling for the AI era: Education’s next revolution — This comment is thought-provoking because it shifts the focus from technological capabilities to equity and access issue…
S14
How AI Drives Innovation and Economic Growth — It’s the aspirational job that created Gurgaon’s and Noida’s and Mohali’s of this country. And those people are going to…
S15
AI Impact Summit 2026: Global Ministerial Discussions on Inclusive AI Development — And this requires proactive and coherent policy responses. First, people must be at the center of AI strategy, as we hea…
S16
Optimism for AI – Leading with empathy — Regulators and government officials have responsibility to balance innovation protection with human welfare
S17
Anthropic report shows AI is reshaping work instead of replacing jobs — A new report by Anthropicsuggestsfears that AI will replace jobs remain overstated, with current use showing AI supporti…
S18
(Interactive Dialogue 2) Summit of the Future – General Assembly, 79th session — The Republic of Korea calls for developing governance frameworks for AI and emerging technologies. This is seen as neces…
S19
From principles to practice: Governing advanced AI in action — – Balancing rapid technological advancement with necessary governance frameworks across different regional approaches B…
S20
AI Governance Dialogue: Steering the future of AI — Martin used a maritime metaphor to explain current governance limitations, stating that while frameworks like the UN’s P…
S21
AI/Gen AI for the Global Goals — Shea Gopaul: So thank you, Sanda. And like Sandra, I’d like to thank the African Union, as well as Global Compact. i…
S22
Engineering Accountable AI Agents in a Global Arms Race: A Panel Discussion Report — Rees-Jones takes an optimistic view that AI can provide personalized tutoring for reskilling in areas like coding, while…
S23
AI and human creativity: Who should hold the brush? — Economic structures that value human creativity:If AI can flood the market with ‘good enough’ content at minimal cost, w…
S24
Harnessing Collective AI for India’s Social and Economic Development — Kushe Bahl believes that AI will fundamentally reshape jobs rather than just replacing them outright. He suggests this t…
S25
AI (and) education: Convergences between Chinese and European pedagogical practices — **Norman Sze** (former Chair of Deloitte China) provided industry perspective on AI’s impact on professional work, notin…
S26
Closing Ceremony — Human rights | Legal and regulatory This argument advocates for a human rights-based approach to data governance and ar…
S27
AI for Good Technology That Empowers People — Ambassador Reintam Saar from Estonia outlined the structure and objectives of the first Global Dialogue on AI Governance…
S28
A Global Human Rights Approach to Responsible AI Governance | IGF 2023 WS #288 — Stakeholder engagement is identified as a vital component of responsible AI governance. The path towards effective gover…
S29
DC-CIV & DC-NN: From Internet Openness to AI Openness — Wanda Muñoz argues for a human rights-based approach to AI governance, going beyond ethics and principles. She emphasize…
S30
Building Trustworthy AI Foundations and Practical Pathways — “India has scale, India has linguistic diversity, but India also has a lot of different things.”[63]. “In many regions o…
S31
Addressing the gender divide in the e-commerce marketplace – a policy playbook for the global South (IT for Change) — In India, around 80% of the female workforce operates within the informal sector. These informal workers face numerous c…
S32
High Level Session 3: AI & the Future of Work — #### Education and Cognitive Concerns
S33
Education meets AI — Lastly, the significance of critical information and critical thinking in education was recognized, emphasising their po…
S34
Defying Cognitive Atrophy in the Age of AI: A World Economic Forum Stakeholder Dialogue — And then there is the resourcing, the possible divide in education. There could be the highly resourced private schools …
S35
The Intelligent Coworker: AI’s Evolution in the Workplace — -Workforce Impact and Career Evolution- Discussion of how AI will reshape job structures, eliminate traditional entry-le…
S36
Comprehensive Discussion Report: AI’s Transformative Potential for Global Economic Growth — Fink acknowledged that while some jobs may be displaced, new opportunities are simultaneously created. Both speakers agr…
S37
How AI Drives Innovation and Economic Growth — It’s the aspirational job that created Gurgaon’s and Noida’s and Mohali’s of this country. And those people are going to…
S38
Why science metters in global AI governance — “But if your potential or probable outcome is the end of jobs, then you need to think about universal basicism.”[113]. “…
S39
The Impact of Digitalisation and AI on Employment Quality – Challenges and Opportunities — – Employment policies should be interwoven with education, addressing both labour market demand and supply. – The impera…
S40
AI Impact Summit 2026: Global Ministerial Discussions on Inclusive AI Development — And this requires proactive and coherent policy responses. First, people must be at the center of AI strategy, as we hea…
S41
Empowering Workers in the Age of AI — This discussion featured four representatives from the International Labour Organization (ILO) presenting comprehensive …
S42
Comprehensive Report: AI’s Impact on the Future of Work – Davos 2026 Panel Discussion — Bhan argues that AI’s impact on jobs cannot be viewed in isolation but must be considered alongside broader economic dis…
S43
Building Trustworthy AI Foundations and Practical Pathways — “But similarly now, econ of maybe writing novels is gone.”[20]. “The movie industry is worried.”[21]. “That entire econo…
S44
Comprehensive Summary: AI Governance and Societal Transformation – A Keynote Discussion — These technological disparities will coincide with massive job displacement and economic disruption across all sectors s…
S45
AI job displacement: Malaysia’s strategy unveiled — The rise of AI and digitalisationcould displace up to 600,000 workersin Malaysia over the next five years, according to …
S46
AI for Social Empowerment_ Driving Change and Inclusion — – Sabina Dewan- Sandhya Ramachandran Arun- Julie Delahanty Urgent need for comprehensive policy responses including com…
S47
Trade regulations in the digital environment: Is there a gender component? (UNCTAD) — In conclusion, the analysis reinforces the potential of digitalisation and emerging technologies, such as artificial int…
S48
Contents — JD is a Chinese retailer with significant e-commerce logistics. The operation of infrastructure networks, logistics, sou…
S49
How AI Is Transforming Indias Workforce for Global Competitivene — It could mean the old world for Chennai and another enterprise, right? So, there are many reasons why adoption, I think,…
S50
Main Topic 2 – Keynotes  — Effective data collection and analysis are crucial. Transparently executed policies must consider infrastructure, digita…
S51
WS #288 An AI Policy Research Roadmap for Evidence-Based AI Policy — Evidence-based policymaking is crucial but challenging when regulating emerging technologies, requiring sandbox environm…
S52
OECD releases AI Incidents Monitor to address AI challenges with evidence-based policies — The OECD.AI Observatoryreleaseda beta version of the AI Incidents Monitor (AIM). Designed by the OECD.AI Observatory, th…
S53
Skilling and Education in AI — The Professor took a notably realistic turn in acknowledging that AI will inevitably create new forms of inequality, des…
S54
UN High Commissioner urges human rights-centric approach to mitigate risks in AI development — While AI holds transformative potential for solving critical issues like curing cancer and addressing global warming, it…
S55
A Global Human Rights Approach to Responsible AI Governance | IGF 2023 WS #288 — These efforts are crucial to prevent the exacerbation of inequality and the marginalization of vulnerable groups. Stakeh…
S56
AI for Democracy_ Reimagining Governance in the Age of Intelligence — The main areas of disagreement centered on governance mechanisms (binding vs. voluntary frameworks), institutional respo…
S57
WS #98 Towards a global, risk-adaptive AI governance framework — 3. The recognition that cultural differences play a significant role in risk perception and governance approaches. Audi…
S58
Figure I: The Global Risks Landscape 2019 — Beyond the economic risks, there are potential political and societal implications. For example, a world of increasingly…
S59
Agentic AI in Focus Opportunities Risks and Governance — Governance responses should include standards, global norms, and risk procedures, not just regulation Policy should foc…
S60
AI for Social Empowerment_ Driving Change and Inclusion — Sabina points out that AI is causing major disruptions that are already leading companies to lay off workers. Private re…
S61
The mismatch between public fear of AI and its measured impact — These cases demonstrate that AI affects different workplaces in different ways. Gains are clear in specific tasks or wor…
S62
Reinventing Digital Inclusion / DAVOS 2025 — Generative AI is expected to create exponential returns in productivity, particularly in enterprise systems. However, th…
S63
AI/Gen AI for the Global Goals — Shea Gopaul: So thank you, Sanda. And like Sandra, I’d like to thank the African Union, as well as Global Compact. i…
S64
Engineering Accountable AI Agents in a Global Arms Race: A Panel Discussion Report — Rees-Jones takes an optimistic view that AI can provide personalized tutoring for reskilling in areas like coding, while…
S65
Harnessing Collective AI for India’s Social and Economic Development — Kushe Bahl believes that AI will fundamentally reshape jobs rather than just replacing them outright. He suggests this t…
S66
Empowering Workers in the Age of AI — Focus on augmentation and transformation of existing roles rather than wholesale job replacement
S67
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Panel Discussion Moderator Sidharth Madaan — Radhicka Kapoor provided a more nuanced perspective, citing research showing that while most jobs will be exposed to AI …
S68
The Future of Innovation and Entrepreneurship in the AI Era: A World Economic Forum Panel Discussion — It is changing how people get jobs and how they get hired for jobs. So an example of that is entrepreneurs often now are…
S69
Closing Ceremony — Human rights | Legal and regulatory This argument advocates for a human rights-based approach to data governance and ar…
S70
Comprehensive Report: UN General Assembly High-Level Meeting on the 20-Year Review of the World Summit on the Information Society (WSIS) Outcomes — Artificial Intelligence Governance and Ethics Human rights | Legal and regulatory Lithuania called for artificial inte…
S71
Democratizing AI Building Trustworthy Systems for Everyone — Dr. Garg highlights that the biggest challenge is governing the sharing mechanisms, protocols and the talent needed to m…
S72
First round of informal consultations with member states, observers and stakeholders (2024) — On internet governance, Denmark endorses a human rights-centred multi-stakeholder model, advocating its importance to SD…
S73
Keeping AI in check — Societies should not be forgetful of the fact that technology is a product of the human mind and that the most intellige…
S74
A Digital Future for All (morning sessions) — Robert Muggah: There are multiple risks, some of which have been discussed over the last couple of hours. Some of the…
S75
Panel Discussion AI & Cybersecurity _ India AI Impact Summit — “Yet, only countries with AI capabilities can reap actual AI benefits to their fullest potential”[31]. “A collaborative …
S76
AI disruption risk seen as lower for India’s white-collar jobs — Indiafacesa lower risk of AI-driven disruption to white-collar jobs than Western economies, IT Secretary S Krishnan said…
S77
High Level Session 3: AI & the Future of Work — #### Education and Cognitive Concerns
S78
Education meets AI — Access to devices is a critical challenge faced in disadvantaged parts of the world. The scarcity of devices leads to gr…
S79
Can AI replace the transmission of wisdom? — The world of education is changing radically and rapidly. Generative AI tools are now capable of writing essays, solving…
S80
World Economic Forum Open Forum: Visions for 2050 – Discussion Report — The discussion began with cautious optimism as panelists shared their hopes for 2050, but the tone became increasingly u…
S81
Global Risks 2025 / Davos 2025 — The tone of the discussion was initially quite sobering as the panelists discussed serious global risks and challenges. …
S82
AI and Digital Developments Forecast for 2026 — The tone begins as analytical and educational but becomes increasingly cautionary and urgent throughout the conversation…
S83
Defying Cognitive Atrophy in the Age of AI: A World Economic Forum Stakeholder Dialogue — The discussion began with a cautiously optimistic tone, acknowledging both opportunities and risks. However, the tone be…
S84
Comprehensive Summary: AI Governance and Societal Transformation – A Keynote Discussion — The tone begins confrontational and personal as Hunter-Torricke distances himself from his tech industry past, then shif…
S85
What happens to software careers in the AI era — AI is rapidly reshaping what it means to work as a software developer, and the shift is already visible inside organisat…
S86
Upskilling for the AI era: Education’s next revolution — The tone is consistently optimistic, motivational, and action-oriented throughout. The speaker maintains an enthusiastic…
S87
Towards a Resilient Information Ecosystem: Balancing Platform Governance and Technology — – **Platform Governance and Regulatory Challenges**: Regulatory authorities face significant challenges in governing dig…
S88
Shaping the Future AI Strategies for Jobs and Economic Development — The discussion maintained an optimistic yet pragmatic tone throughout. While acknowledging significant challenges around…
S89
WS #81 Universal Standards for Digital Infrastructure Resiliency — The tone was largely collaborative and solution-oriented. Panelists built on each other’s points and acknowledged the co…
S90
WS #302 Upgrading Digital Governance at the Local Level — The discussion maintained a consistently professional and collaborative tone throughout. It began with formal introducti…
S91
Smaller Footprint Bigger Impact Building Sustainable AI for the Future — The tone was consistently optimistic and collaborative throughout the discussion. Speakers maintained a solution-oriente…
S92
Wrap up — This served as a compelling call to action that elevated the urgency of the entire discussion. It moved the conversation…
S93
(Interactive Dialogue 1) Summit of the Future – General Assembly, 79th session — The overall tone was one of urgency and calls for action, with many speakers emphasizing the need for immediate reforms …
S94
(Plenary segment) Summit of the Future – General Assembly, 4th plenary meeting, 79th session — The tone of the discussion was generally optimistic and forward-looking, with speakers emphasizing the need for urgent a…
S95
New Technologies and the Impact on Human Rights — The discussion maintained a collaborative and constructive tone throughout, despite addressing complex and sometimes con…
S96
Responsible AI for Children Safe Playful and Empowering Learning — The discussion concluded with a strong emphasis on urgency and action. Rather than waiting for perfect solutions, the pa…
S97
Impact & the Role of AI How Artificial Intelligence Is Changing Everything — Long-term employment impacts remain uncertain despite current stability in hiring patterns
S98
OPENING STATEMENTS FROM STAKEHOLDERS — Discussions on artificial intelligence show that technological development is not without risk.
S99
From India to the Global South_ Advancing Social Impact with AI — So good evening. My name is Ashish Pratap Singh. I am the CEO of Prasima AI. My father runs an MSME business in Lucknow….
S100
Strengthening Worker Autonomy in the Modern Workplace | IGF 2023 WS #494 — The analysis explores the impact of technology on various social issues, including labour exploitation, inequality, pove…
S101
5. — – #6. The impact of technology on the quality and quantity of jobs
S102
Tech Diplomacy: Actors, Trends, and Controversies – Full Book — Economic dominance: Tech companies have demonstrated unprecedented economic power, often surpassing the GDPs of entire n…
S103
YCIG & DTC: Future of Education and Work with advancing tech & internet — Marko Paloski highlights the potential risk of job losses due to automation. He points out that a significant portion of…
S104
Big Tech boosts India’s AI ambitions amid concerns over talent flight and limited infrastructure — Major announcements from Microsoft ($17.5bn) and Amazon (over $35bn by 2030) have placed India at the centre of global AI …
S105
Contents — Technological advances – notably in such fields as automation, robotics, artificial intelligence, …
S106
Rights and Permissions — This troubling scenario, however, is on balance unfounded. It is true that in some advanced economies and middle-income …
S107
https://dig.watch/event/india-ai-impact-summit-2026/ai-2-0-reimagining-indian-education-system — India has millions and trillions of problems in each and every corner. You pick up one problem, solve it. You get your d…
S108
The Innovation Beneath AI: The US-India Partnership powering the AI Era — Adding to what just was discussed, we have a tendency to overestimate the next two years and impact and underestimate wh…
S109
Leaders’ Plenary | Global Vision for AI Impact and Governance- Afternoon Session — It is very clear to me that the 2030s will be a chaotic era. There will be disruption. There will be large changes. And …
S110
Future-proofing global tech governance: a bottom-up approach | IGF 2023 Open Forum #44 — Thomas Schneider:Something that always strikes me is when you talk about how does this need to evolve, is that while tec…
S111
test marko — Comparison to New York City’s focus on horse-drawn carriage issues just before the advent of automobiles.
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
Sabina Dewan
4 arguments · 146 words per minute · 2588 words · 1062 seconds
Argument 1
AI will drive 30‑40% productivity gains that translate into large workforce cuts, with evidence of layoffs and algorithmic gig‑economy management (Sabina)
EXPLANATION
Sabina argues that AI is expected to generate substantial productivity gains of 30‑40%, which will inevitably lead to significant reductions in workforce size. She highlights that these efficiency gains are already manifesting as large‑scale layoffs and the rise of algorithmic management in the gig economy.
EVIDENCE
She cites private research indicating that companies anticipate 30-40% time-saving and productivity gains that translate into workforce cuts [8]. She points to recent news of major tech firms laying off thousands of workers as concrete evidence of job losses [152-154]. She also references the algorithmic management of gig workers, where platforms can dismiss workers without redress, illustrating a new form of labor control [160-164].
MAJOR DISCUSSION POINT
Job displacement and productivity gains
AGREED WITH
Sandhya Ramachandran Arun, Anurag Behar, Julie Delahanty
DISAGREED WITH
Sandhya Ramachandran Arun
Argument 2
Immediate reforms are required: competition policy, antitrust, tax, labour law, universal social protection, and skill systems to mitigate AI‑driven disruption (Sabina)
EXPLANATION
Sabina calls for swift policy action across multiple domains to counteract the disruptive impact of AI on labour markets. She stresses that competition, antitrust, tax, labour regulations, universal social protection and skill development systems must be re‑engineered to protect workers.
EVIDENCE
She enumerates the need for competition policy, antitrust, tax reforms, labour regulations, universal social protection and robust skill systems as essential levers to address AI-driven disruption [320-334]. She also notes the urgency of acting now rather than waiting for more empirical evidence [176-183].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The need for comprehensive policy responses (including competition, tax, labour law reforms and universal social protection) is highlighted in [S2]; proactive, coherent policy frameworks that invest in skills and social protection are advocated in [S15].
MAJOR DISCUSSION POINT
Regulatory reforms for AI disruption
AGREED WITH
Sandhya Ramachandran Arun, Julie Delahanty, Anurag Behar
DISAGREED WITH
Julie Delahanty
Argument 3
Current skill levels are very low; massive upskilling for AI is unrealistic without fundamental education reform (Sabina)
EXPLANATION
Sabina highlights the severe skill gap in the labour force, noting that only a tiny fraction possess formal skills, making large‑scale AI upskilling impractical without deep reforms in education and skill systems. She argues that without addressing basic literacy and numeracy, AI‑focused training cannot succeed.
EVIDENCE
She cites labour-force survey data showing only 4.1% of respondents identify as having formal skills, and points to the poor quality of education that hampers the ability to train workers for AI roles [325-332].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The AI skills gap as an equity and access issue is discussed in [S13]; the importance of upskilling linked to K-12 education is noted in [S12]; and the lack of basic skills among the bottom quartile is highlighted in [S11].
MAJOR DISCUSSION POINT
Skill gaps and education challenges
Argument 4
AI intensifies inequality, surveillance, and even cognitive decline, demanding urgent precautionary measures (Sabina)
EXPLANATION
Sabina warns that AI not only threatens jobs but also deepens existing inequalities, expands surveillance, and may be linked to emerging cognitive decline among youth. She urges immediate precautionary action to mitigate these broader societal harms.
EVIDENCE
She references AI-enabled surveillance influencing work decisions and exacerbating inequality, noting the massive market caps of tech firms that concentrate capital while labour’s share shrinks [10-13]. She also cites emerging research indicating cognitive decline, higher rates of depression and anxiety among the current generation, which could make them more replaceable by machines [313-316].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Inequality concerns and the need to support the lower-quartile workforce are raised in [S11]; warnings about rapid job loss and strain on labour markets appear in [S14]; and the call for human-centred precautionary policies is echoed in [S15].
MAJOR DISCUSSION POINT
Societal risks of AI
AGREED WITH
Julie Delahanty, Sandhya Ramachandran Arun
Anurag Behar
4 arguments · 152 words per minute · 2047 words · 807 seconds
Argument 1
AI’s monetisation will come from labour reduction or new products, so both job destruction and creation are expected (Anurag)
EXPLANATION
Anurag asserts that the economic returns from AI will be derived either from increased productivity that reduces labour demand or from the creation of novel products and services. Consequently, both job losses and new employment opportunities are anticipated.
EVIDENCE
He explains that investment in AI must be justified by monetisation, which will arise either from productivity gains (i.e., labour reduction) or from new products and services, or a combination of both [34-38].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Productivity gains that can lead to both displacement and new roles are documented in [S9]; PwC’s findings of wage increases alongside productivity surges suggest creation of value [S10]; policy discussions acknowledging both outcomes are present in [S15].
MAJOR DISCUSSION POINT
Economic drivers of AI impact
AGREED WITH
Sabina Dewan, Sandhya Ramachandran Arun, Julie Delahanty
Argument 2
Governments must design policies that balance innovation with labour‑market protection, using human‑centred AI design (Anurag)
EXPLANATION
Anurag calls on governments and institutions to craft policies that simultaneously foster AI innovation while safeguarding workers. He emphasizes a human‑centred approach to AI governance to minimise labour market disruption.
EVIDENCE
He poses a direct question to Julie about how governments can responsibly govern AI to minimise labour-market disruption, highlighting the need for policies that protect workers while enabling innovation [112-118].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Balanced policy frameworks that protect workers while fostering innovation are outlined in [S15]; the regulator’s role in aligning innovation with human welfare is stressed in [S16]; and multiple governance initiatives are described in [S18], [S19] and [S20].
MAJOR DISCUSSION POINT
Policy design for AI and labour markets
AGREED WITH
Sabina Dewan, Sandhya Ramachandran Arun, Julie Delahanty
Argument 3
Coding efficiency raises concerns for IT employment; broader industry impacts require new training pathways (Anurag)
EXPLANATION
Anurag raises the concern that AI tools making coding easier could lead to substantial job losses in the IT sector and beyond. He asks what new training pathways are needed to address this shift across industries.
EVIDENCE
He asks whether the ability of AI to perform 50-70% of coding tasks will inevitably cause IT job losses and questions the impact on other sectors such as design, marketing, and academic research assistance [66-72].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Studies showing AI can automate 50-70% of coding tasks and boost productivity are reported in [S9]; the Anthropic report indicating AI mainly assists rather than replaces coders provides a counterpoint and broader perspective [S17].
MAJOR DISCUSSION POINT
Impact of AI on coding jobs
DISAGREED WITH
Sandhya Ramachandran Arun
Argument 4
Acknowledges personal conflict of interest and the tension between optimistic tech narratives and doomer warnings (Anurag)
EXPLANATION
Anurag openly discloses his conflict of interest, noting his foundation’s ownership stake in a tech company, and reflects on his internal tension between optimism (boomer) and pessimism (doomer) regarding AI’s societal impact.
EVIDENCE
He reveals that his foundation owns about 70% of Wipro, creating a conflict between tech interests and his mandate to protect vulnerable populations, and describes himself as a “boomer” in the head and a “doomer” in the heart, illustrating the tension between optimism and caution [247-272].
MAJOR DISCUSSION POINT
Conflict of interest and perspective tension
Sandhya Ramachandran Arun
4 arguments · 158 words per minute · 1684 words · 636 seconds
Argument 1
AI reshapes job descriptions; junior developers become “managers of AI” rather than being eliminated, and many functions remain consultative (Sandhya)
EXPLANATION
Sandhya explains that AI changes the nature of technical roles, turning junior developers into overseers who manage AI‑generated code, while many consulting‑type services remain largely unaffected. This shift reduces displacement risk for certain roles.
EVIDENCE
She notes that the role of a junior developer now becomes that of a manager of AI, overseeing design, architecture, and security while delegating coding to AI agents [86-87]. She also points out that most of Wipro’s work is consultative and therefore not experiencing large-scale displacement [60-62].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The Anthropic analysis that AI supports workers and changes role dynamics aligns with this transformation [S17]; coding productivity gains that enable oversight roles are described in [S9].
MAJOR DISCUSSION POINT
Job role transformation with AI
AGREED WITH
Sabina Dewan, Anurag Behar, Julie Delahanty
DISAGREED WITH
Anurag Behar
Argument 2
Policy, leadership, and governance frameworks are essential to steer AI benefits and guard against risks (Sandhya)
EXPLANATION
Sandhya stresses that effective policy, strong leadership, and robust governance structures are crucial to ensure AI delivers benefits while mitigating its risks. She likens the need for guardrails to those applied in nuclear energy.
EVIDENCE
She argues that it is up to humans to create policy, governance mechanisms, and guardrails to derive benefits and mitigate risks of AI, drawing a parallel with nuclear energy governance and emphasizing the role of leadership and guidelines [286-294].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Calls for proactive policy, leadership and governance to harness AI benefits while mitigating risks appear in [S15] and [S16]; international calls for AI governance frameworks are detailed in [S18], [S19] and [S20].
MAJOR DISCUSSION POINT
Governance and leadership for AI
AGREED WITH
Sabina Dewan, Julie Delahanty, Anurag Behar
Argument 3
Companies are overhauling hiring criteria toward learnability and adaptability, and providing calibrated learning modules for AI‑augmented roles (Sandhya)
EXPLANATION
Sandhya describes a shift in recruitment toward assessing candidates’ learnability, communication, and adaptability, and notes that her firm has created role‑personas and specific learning modules to upskill employees for AI‑enhanced responsibilities.
EVIDENCE
She outlines that hiring criteria have moved to focus on learnability, communication, technical ideas, and adaptability [53-55], and that her organization has built role personas and calibrated learning modules to help employees adapt to AI-augmented roles [56-58].
MAJOR DISCUSSION POINT
Reskilling and hiring transformation
Argument 4
Human creativity, wisdom, and foresight are indispensable; AI will be guided by human policy and governance (Sandhya)
EXPLANATION
Sandhya argues that despite AI’s rapid evolution, human creativity, wisdom, and vision remain essential, and that policy and governance must keep humanity at the centre of AI development to ensure beneficial outcomes.
EVIDENCE
She emphasizes that creativity, wisdom, vision and human-centricity are core to any technology disruptor, and that policy, governance, and leadership are needed to keep humanity at the centre of AI’s evolution [280-287].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Emphasis on human-centred AI and the need for governance that keeps humanity at the core is found in [S16]; further support comes from governance discussions in [S18][S20].
MAJOR DISCUSSION POINT
Human‑centric AI
AGREED WITH
Sabina Dewan, Julie Delahanty
Julie Delahanty
4 arguments · 150 words per minute · 1060 words · 422 seconds
Argument 1
Past technological shifts caused both job loss and new opportunities; systematic data collection is needed to understand AI’s labour effects (Julie)
EXPLANATION
Julie reflects on historical technology waves, noting they caused both displacement and new jobs, and stresses the need for systematic, evidence‑based data collection to monitor AI’s impact on labour markets. She points to ongoing research programmes that gather household, firm and worker data.
EVIDENCE
She recalls the computer era’s job losses and the difficulty of predicting outcomes, then describes AI4D’s large research programme that collects household, firm-level and worker data in sub-Saharan Africa to track AI’s real-world labour impacts [127-129][136-138].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The importance of systematic, evidence-based data collection to monitor AI’s labour impact is highlighted in [S15]; historical productivity improvements and their labour implications are discussed in [S9].
MAJOR DISCUSSION POINT
Historical perspective and data‑driven understanding
Argument 2
Strong regulatory and labour institutions, backed by research and tools like the Global Index on Responsible AI, enable evidence‑based policy (Julie)
EXPLANATION
Julie argues that robust regulatory and labour institutions, supported by research tools such as the Global Index on Responsible AI, are essential for crafting evidence‑based policies that protect workers and guide AI development responsibly.
EVIDENCE
She introduces the Global Index on Responsible AI, which covers 138 countries and provides rights-based data on labour protection, enabling governments to design better regulations; she also highlights the importance of evidence and comparative data for policy making [233-242].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Evidence-based policy making supported by robust institutions and tools is advocated in [S15]; the need for guardrails and institutional frameworks is reinforced in [S20].
MAJOR DISCUSSION POINT
Evidence‑based AI governance
AGREED WITH
Sabina Dewan, Sandhya Ramachandran Arun, Anurag Behar
DISAGREED WITH
Sabina Dewan
Argument 3
Ongoing research and country‑level evidence are needed to design effective skill‑development and social‑protection programmes (Julie)
EXPLANATION
Julie emphasizes that continuous research and country‑specific data are crucial for informing skill‑development strategies and social‑protection measures that can help workers transition in an AI‑driven economy.
EVIDENCE
She cites AI4D’s research program that gathers household, firm and worker data to understand who benefits or is displaced, informing governments on skills development, social protections and labour-rights policies [136-139].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Continuous research, country-specific data, and investment in skills and social protection programmes are emphasized in [S15]; productivity and labour impact studies in [S9] provide additional context.
MAJOR DISCUSSION POINT
Research for skill and protection policy
Argument 4
A human‑centred, co‑creation approach is required; while outcomes are uncertain, responsible design can safeguard workers (Julie)
EXPLANATION
Julie advocates for a human‑centred, co‑creation model where workers, communities and employers are involved in shaping AI systems, ensuring that technology enhances job quality and does not exacerbate inequality.
EVIDENCE
She explains that AI4D’s approach involves co-creating technologies with workers, communities and employers to improve job quality, productivity and equity, stressing the need to understand who benefits and to shape AI accordingly [132-138].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Human-centred, co-creation models for AI governance are promoted in [S16]; broader governance frameworks that enable responsible design are discussed in [S18][S20].
MAJOR DISCUSSION POINT
Co‑creation and human‑centred AI design
Agreements
Agreement Points
AI will cause both job displacement and creation, requiring reskilling and new role definitions.
Speakers: Sabina Dewan, Sandhya Ramachandran Arun, Anurag Behar, Julie Delahanty
AI will drive 30‑40% productivity gains that translate into large workforce cuts, with evidence of layoffs and algorithmic gig‑economy management (Sabina)
AI reshapes job descriptions; junior developers become “managers of AI” rather than being eliminated, and many functions remain consultative (Sandhya)
AI’s monetisation will come from labour reduction or new products, so both job destruction and creation are expected (Anurag)
Past technological shifts caused both job loss and new opportunities; systematic data collection is needed to understand AI’s labour effects (Julie)
All four speakers acknowledge that AI will generate efficiency gains that can displace workers while also creating new types of work, making reskilling and redefining roles essential [8][86-87][34-38][127-129][136-138].
POLICY CONTEXT (KNOWLEDGE BASE)
This view mirrors analyses that AI will reshape job structures, eliminating some entry-level paths while creating new opportunities and demanding reskilling, as discussed in the Intelligent Coworker report [S35] and the comprehensive discussion on AI’s transformative potential [S36]; policy briefs also stress training, skilling and reskilling as core responses [S38].
Comprehensive policy, governance and institutional reforms are needed to manage AI’s impact on labour markets.
Speakers: Sabina Dewan, Sandhya Ramachandran Arun, Julie Delahanty, Anurag Behar
Immediate reforms are required: competition policy, antitrust, tax, labour law, universal social protection, and skill systems to mitigate AI‑driven disruption (Sabina)
Policy, leadership, and governance frameworks are essential to steer AI benefits and guard against risks (Sandhya)
Strong regulatory and labour institutions, backed by research and tools like the Global Index on Responsible AI, enable evidence‑based policy (Julie)
Governments must design policies that balance innovation with labour‑market protection, using human‑centred AI design (Anurag)
The panel concurs that waiting is not an option and that coordinated regulatory, tax, competition, labour-law and social-protection measures, together with strong governance, are required to mitigate AI-driven disruption [320-334][176-183][286-294][130-138][112-118].
POLICY CONTEXT (KNOWLEDGE BASE)
Calls for broad reforms echo the AI Impact Summit 2026 which urged proactive, people-centred policy, lifelong learning and social protection [S40]; the ILO’s agenda on AI and work also highlights institutional reforms and inclusive governance [S41]; and the UN High Commissioner emphasizes human-rights-centric frameworks [S54].
Evidence‑based monitoring and data collection are crucial for informed AI labour policies.
Speakers: Julie Delahanty, Sabina Dewan, Anurag Behar
Past technological shifts caused both job loss and new opportunities; systematic data collection is needed to understand AI’s labour effects (Julie)
We do not have the luxury to wait for empirical evidence before acting on AI’s impact (Sabina)
What lessons are being learned across countries to create AI opportunities without deepening inequality? (Anurag)
All three stress the need for robust, country-level data and research to guide policy, warning against inaction while evidence is gathered [233-242][176-183][227-232].
POLICY CONTEXT (KNOWLEDGE BASE)
Evidence-based monitoring is highlighted in the OECD AI Incidents Monitor initiative [S52] and in the AI Policy Research Roadmap which stresses sandbox environments and multi-stakeholder data sharing [S51]; the AI Impact Summit also identified transparent data collection as essential for policy design [S50].
AI intensifies inequality and precarity; a human‑centred approach is required to protect vulnerable workers.
Speakers: Sabina Dewan, Julie Delahanty, Sandhya Ramachandran Arun
AI intensifies inequality, surveillance, and even cognitive decline, demanding urgent precautionary measures (Sabina)
A human‑centred, co‑creation approach is required; responsible design can safeguard workers (Julie)
Human creativity, wisdom, and foresight are indispensable; AI will be guided by human policy and governance (Sandhya)
The speakers agree that AI risks exacerbate existing inequities and that placing humans at the centre of AI design and governance is essential to mitigate those risks [10-13][313-316][132-138][280-287].
POLICY CONTEXT (KNOWLEDGE BASE)
Human-centred approaches are advocated by the UN High Commissioner for Human Rights calling for rights-based AI governance [S54] and by the IGF’s human-rights-focused AI governance framework [S55]; the AI Impact Summit likewise placed people at the centre of AI strategy to mitigate inequality [S40]; and the ILO stresses protecting vulnerable workers in the age of AI [S41].
Similar Viewpoints
Both recognise that AI will generate efficiency gains that reshape jobs, but stress that human oversight and new skill sets will be needed rather than wholesale job loss [8][86-87][53-55][56-58].
Speakers: Sabina Dewan, Sandhya Ramachandran Arun
AI will drive 30‑40% productivity gains that translate into large workforce cuts, with evidence of layoffs and algorithmic gig‑economy management (Sabina)
AI reshapes job descriptions; junior developers become “managers of AI” rather than being eliminated, and many functions remain consultative (Sandhya)
Both stress that AI’s impact is mixed and that systematic evidence is needed to guide policy responses [34-38][127-129][136-138].
Speakers: Anurag Behar, Julie Delahanty
AI’s monetisation will come from labour reduction or new products, so both job destruction and creation are expected (Anurag)
Past technological shifts caused both job loss and new opportunities; systematic data collection is needed to understand AI’s labour effects (Julie)
Both argue that robust governance structures and institutional capacity are a prerequisite for responsible AI deployment [286-294][130-138].
Speakers: Sandhya Ramachandran Arun, Julie Delahanty
Policy, leadership, and governance frameworks are essential to steer AI benefits and guard against risks (Sandhya)
Strong regulatory and labour institutions, backed by research and tools like the Global Index on Responsible AI, enable evidence‑based policy (Julie)
Unexpected Consensus
Both a tech‑industry representative (Sandhya) and a labour‑market critic (Sabina) agree that waiting for AI impacts to fully materialise is not an option.
Speakers: Sabina Dewan, Sandhya Ramachandran Arun
We need to act now; we don’t have the luxury to wait for empirical evidence (Sabina)
Watching and waiting is certainly not an option; we must act now (Sandhya)
Despite their differing tones (Sabina’s cautionary stance and Sandhya’s optimistic tech perspective), both stress immediate action rather than passive observation [176-183][355-358].
POLICY CONTEXT (KNOWLEDGE BASE)
The urgency expressed aligns with the statement from the AI for Social Empowerment panel that ‘watching and waiting is certainly not an option’ [S46] and with the AI Impact Summit’s call for immediate, coherent policy responses [S40].
Overall Assessment

The panel shows strong convergence on four core themes: (1) AI will both displace and create jobs, necessitating reskilling; (2) comprehensive policy and governance reforms are essential; (3) evidence‑based monitoring is critical; (4) AI risks exacerbate inequality and require human‑centred approaches. These shared positions cut across the digital economy, capacity development, AI governance, and human‑rights domains.

High consensus – most speakers align on the need for proactive, evidence‑driven policy and human‑centred governance to manage AI’s labour impacts, indicating a unified call for coordinated action across governments, industry and research communities.

Differences
Different Viewpoints
Magnitude of AI‑driven job displacement
Speakers: Sabina Dewan, Sandhya Ramachandran Arun
AI will drive 30‑40% productivity gains that translate into large workforce cuts, with evidence of layoffs and algorithmic gig‑economy management (Sabina)
AI reshapes job descriptions; junior developers become “managers of AI” rather than being eliminated, and many functions remain consultative (Sandhya)
Sabina cites private research showing 30-40% time-saving that translates into workforce cuts [8] and points to recent mass layoffs in big-tech firms [152-154] as well as algorithmic gig-economy management lacking redress mechanisms [160-164]. Sandhya counters that while coding can be handed to AI, humans must oversee design, architecture and security, turning junior developers into AI-managers, and notes that most of Wipro’s work is consultative, so large-scale displacement is limited [86-87][60-62].
POLICY CONTEXT (KNOWLEDGE BASE)
National assessments such as Malaysia’s projection of up to 600,000 displaced workers illustrate the scale of potential displacement [S45]; broader analyses warn of cross-cutting massive job losses across sectors [S44].
Impact of coding efficiency on IT employment versus role transformation
Speakers: Anurag Behar, Sandhya Ramachandran Arun
Coding efficiency raises concerns for IT employment; broader industry impacts require new training pathways (Anurag)
AI reshapes job descriptions; junior developers become “managers of AI” rather than being eliminated, and many functions remain consultative (Sandhya)
Anurag asks whether AI tools that can perform 50-70% of coding tasks will inevitably lead to IT job losses [66-70]. Sandhya replies that coding can be fully automated but human oversight of design, architecture and security remains essential, turning junior developers into managers of AI, and that consultative services are largely unaffected [84-87][60-62].
POLICY CONTEXT (KNOWLEDGE BASE)
Discussions on role redesign in India highlight that AI-driven coding tools lead to organizational role changes rather than simple headcount reductions [S49]; the Intelligent Coworker report also notes transformation of IT roles rather than pure displacement [S35].
Urgency of policy reforms versus evidence‑based regulatory approach
Speakers: Sabina Dewan, Julie Delahanty
Immediate reforms are required: competition policy, antitrust, tax, labour law, universal social protection, and skill systems to mitigate AI‑driven disruption (Sabina)
Strong regulatory and labour institutions, backed by research and tools like the Global Index on Responsible AI, enable evidence‑based policy (Julie)
Sabina calls for swift, comprehensive reforms across competition, tax, labour, and social protection, warning that waiting for more empirical evidence will be too late [176-183][320-334]. Julie stresses that robust institutions and systematic data (e.g., the Global Index on Responsible AI covering 138 countries) are needed to craft evidence-based policies, noting that standardized regulation does not yet exist [233-242][244-246].
POLICY CONTEXT (KNOWLEDGE BASE)
While some actors call for urgent, comprehensive reforms [S46], other analyses stress the need for evidence-based, sandbox-tested regulation and multi-stakeholder roadmaps [S51]; the governance debate at the AI for Democracy workshop reflects this tension [S56].
Unexpected Differences
Risk perception: cognitive decline and broader societal harms versus confidence in human‑centric governance
Speakers: Sabina Dewan, Sandhya Ramachandran Arun
AI intensifies inequality, surveillance, and even cognitive decline, demanding urgent precautionary measures (Sabina)
Human creativity, wisdom, and foresight are indispensable; AI will be guided by human policy and governance (Sandhya)
Sabina links AI deployment to emerging cognitive decline among youth and higher rates of depression and anxiety, and calls for urgent precautionary measures [313-316]. Sandhya, while acknowledging AI’s transformative power, stresses that human creativity, wisdom and foresight will remain central and that appropriate policy and governance can keep humanity at the core, without highlighting health-related harms. The contrast between health-focused alarm and governance-focused optimism was not anticipated given both speakers’ expertise.
POLICY CONTEXT (KNOWLEDGE BASE)
The Global Risks Landscape identifies declining empathy and cognitive impacts as societal threats [S58]; conversely, the UN High Commissioner and AI governance guidelines emphasize human-centric safeguards to prevent such harms [S54][S59].
Overall Assessment

The discussion revealed clear divergences on how severe AI‑driven job displacement will be, with Sabina emphasizing large‑scale cuts and Sandhya portraying role transformation rather than elimination. There is also tension between calls for immediate, sweeping policy reforms and the need for evidence‑based regulation. While all participants concur on the necessity of governance, they differ on urgency, scope and the evidentiary basis of action. These disagreements highlight the challenge of aligning policy responses with differing risk assessments and stakeholder perspectives.

Moderate to high disagreement: the participants share a common goal of responsible AI governance but diverge substantially on the perceived magnitude of labour impacts and the pace and nature of policy interventions. This suggests that consensus‑building will require bridging gaps between alarmist and optimistic viewpoints, and between urgent reformist and evidence‑driven policy approaches.

Partial Agreements
All three agree that policy, governance and institutional frameworks are needed to manage AI’s impact on labour markets. Sabina pushes for urgent, sweeping reforms; Sandhya emphasizes the need for policy, leadership and guardrails; Julie highlights the role of evidence‑based regulation and data tools. Their shared goal is responsible AI governance, but they differ on the speed and evidentiary basis of action [176-183][320-334][286-294][233-242][244-246].
Speakers: Sabina Dewan, Sandhya Ramachandran Arun, Julie Delahanty
Immediate reforms are required: competition policy, antitrust, tax, labour law, universal social protection, and skill systems to mitigate AI-driven disruption (Sabina)
Policy, leadership, and governance frameworks are essential to steer AI benefits and guard against risks (Sandhya)
Strong regulatory and labour institutions, backed by research and tools like the Global Index on Responsible AI, enable evidence-based policy (Julie)
Takeaways
Key takeaways
AI is delivering 30-40% productivity gains that are already translating into large workforce reductions and layoffs, especially in the tech sector.
Job roles are being reshaped rather than simply eliminated; junior developers become managers of AI tools, and many functions remain consultative, strategic, or supervisory.
Both job destruction and creation are expected; monetisation of AI will come from labour reduction and new products/services.
Urgent governance is required: competition policy, antitrust, tax reforms, stronger labour laws, universal social protection and robust skill-development systems.
Human-centred, co-created AI design is essential to mitigate inequality, surveillance, and algorithmic gig-economy exploitation.
Evidence-based policy depends on systematic data collection (e.g., Global Index on Responsible AI, AI4D research) to track labour-market impacts.
Current skill levels, particularly in India’s informal sector, are very low; massive upskilling is unrealistic without fundamental education reform.
AI exacerbates existing precarity, inequality, and even cognitive-health concerns, making immediate action preferable to waiting.
Panelists agree that waiting for perfect evidence is not an option; proactive, coordinated action across government, industry, and academia is needed.
Resolutions and action items
Leverage the Global Index on Responsible AI to provide country-level evidence for labour-rights and AI governance reforms.
Governments should strengthen regulatory and labour institutions, invest in research ecosystems, and design policies that balance innovation with worker protection.
Companies (e.g., Wipro) to continue creating calibrated learning modules, role personas, and reskilling pathways that emphasize learnability, adaptability, and AI oversight.
Develop universal social-protection mechanisms (healthcare, unemployment benefits, consumption smoothing) to support workers displaced by AI.
Implement human-centred AI co-creation processes involving workers, communities, and employers to ensure AI augments rather than replaces human labour.
Accelerate data collection on AI’s labour impacts (household, firm-level, and worker-level surveys) to inform skill-development and social-protection programmes.
Unresolved issues
Precise magnitude, timing, and sector-specific breakdown of AI-driven job displacement versus job creation remain unknown.
Effective strategies for upskilling India’s large informal and low-skill workforce to meet AI-driven demand are not defined.
Specific design of competition, antitrust, and tax policies to curb capital concentration and protect labour has not been detailed.
Legal frameworks to address algorithmic management and redress in the gig economy are still lacking.
How to mitigate AI-related cognitive decline, mental-health impacts, and harms to broader societal well-being has not been resolved.
Balancing rapid AI innovation with regulation, without stifling growth, remains an open policy question.
Suggested compromises
Acknowledge that AI will cause both job losses and new opportunities; focus on transition policies rather than a purely doom-or-optimism narrative.
Use “doom-and-gloom” framing to motivate action while avoiding paralysis; encourage proactive policy without fostering panic.
Combine tech-sector optimism (e.g., new AI-manager roles) with precautionary labour safeguards (social protection, skill training).
Adopt incremental, evidence-based regulations (human-centred design, co-creation) that allow innovation to continue while protecting workers.
Thought Provoking Comments
When you talk to companies privately, they will own up to anywhere between 30% to 40% time‑saving, which then translates into significant workforce cuts. AI systems are enabling surveillance, influencing who gets work, and grossly exacerbating inequality.
She reveals hidden, empirical data that companies acknowledge large productivity gains that likely lead to layoffs, and links AI to broader societal harms, shifting the debate from speculative to evidence‑based urgency.
Sets a tone of concern and urgency, prompting the panel to focus on regulation and social institutions rather than treating AI as a neutral technological advance.
Speaker: Sabina Dewan
What kind of jobs are going to get displaced, destroyed? And what kind of jobs are going to get created? What is the underlying dynamic because of which these jobs will be created and the jobs will be destroyed?
A direct, commonsense framing that forces the discussion from abstract AI hype to concrete labor market dynamics.
Steers the conversation toward concrete examples (coding, marketing, finance) and elicits detailed responses from Sandhya, opening the floor for nuanced analysis of job transformation versus displacement.
Speaker: Anurag Behar
The role of a junior developer becomes that of a little manager of AI, overseeing design, architecture, security, rather than being displaced.
She reframes the narrative of job loss into role evolution, highlighting how AI can augment rather than replace certain skill sets.
Introduces a nuanced perspective that tempers the alarmist view, prompting other panelists to consider upskilling and role redesign rather than outright job elimination.
Speaker: Sandhya Ramachandran Arun
Strong institutions—regulatory, labor, and research ecosystems—are essential to understand where job losses and biases happen, and to co‑create AI with workers, communities, and employers.
She shifts the focus from technology itself to the institutional capacity needed for responsible governance, emphasizing co‑creation and evidence‑based policy.
Broadens the discussion to include governance frameworks, leading to later mentions of the Global Index on Responsible AI and the need for data‑driven policy making.
Speaker: Julie Delahanty
We need to go beyond the quantity of jobs and look at the impact on the quality of work—e.g., algorithmic management in the gig economy where workers have no redressal mechanism.
Expands the labor conversation to include precarious work and platform economies, highlighting how AI changes power dynamics beyond simple headcount.
Redirects attention to the gig economy and platform governance, prompting Anurag and others to discuss informal sector vulnerabilities, especially in India.
Speaker: Sabina Dewan
In India, more than 90% of employment is informal; losing even a small share of formal jobs has cascading effects on the broader economy and deepens precarity.
She contextualizes the AI impact within the Indian labor market, challenging the assumption that AI effects are limited to the formal sector.
Shifts the conversation to the Global South, highlighting systemic risks and prompting a discussion on universal social protection and skill development.
Speaker: Sabina Dewan
The Global Index on Responsible AI provides country‑level, rights‑based data on labor protection and the right to work for 138 countries, helping governments design evidence‑based policies.
Offers a concrete, actionable tool for policymakers, moving the dialogue from abstract recommendations to tangible resources.
Creates a bridge between research and policy, reinforcing the earlier call for data‑driven governance and influencing later remarks about the need for evidence.
Speaker: Julie Delahanty
AI is attacking the very foundation of education; teachers and students are outsourcing their thinking, leading to cognitive decline and forcing a return to paper‑and‑pencil exams.
Introduces a new domain—education—where AI’s impact may be even more profound, linking cognitive health, assessment integrity, and societal outcomes.
Expands the scope of the discussion beyond labor markets, prompting reflections on the broader societal implications of AI and reinforcing the urgency for comprehensive governance.
Speaker: Anurag Behar (as education lead)
Overall Assessment

The discussion was shaped by a series of pivotal interventions that moved it from a generic hype‑centric talk to a multi‑dimensional analysis of AI’s labor, institutional, and societal impacts. Sabina’s evidence‑based alarm and focus on job quality set a critical backdrop, while Anurag’s probing questions forced concrete examinations of job displacement and creation. Sandhya’s reframing of roles and Julie’s emphasis on strong institutions and data‑driven tools introduced nuance and actionable pathways. Subsequent comments about the informal sector, gig economy, and education broadened the lens to include vulnerable populations and systemic risks. Collectively, these comments redirected the conversation toward urgent, evidence‑based policy design, highlighting the need for governance, upskilling, and protective social architectures across both developed and developing contexts.

Follow-up Questions
What specific policies should governments implement to mitigate AI-induced labor market disruptions and ensure a smooth transition for workers?
Understanding concrete policy measures is crucial for minimizing job losses and protecting workers as AI transforms the labor market.
Speaker: Anurag Behar (to Julie Delahanty)
How can we develop robust, comparable metrics (e.g., a global index) to monitor AI’s impact on labor rights and job quality across countries?
A standardized data set would help policymakers identify gaps, benchmark progress, and design evidence‑based interventions.
Speaker: Julie Delahanty
What mechanisms are needed to provide universal social protection for workers displaced by AI, especially in economies with large informal sectors like India?
Without safety nets, AI‑driven layoffs could exacerbate precarity and inequality among vulnerable populations.
Speaker: Sabina Dewan
In what ways does AI affect job quality—not just quantity—particularly in gig and platform economies where algorithmic management is prevalent?
Beyond headline job counts, AI may alter working conditions, autonomy, and fairness, requiring deeper investigation.
Speaker: Sabina Dewan
What is the relationship between reported cognitive decline among young people and their susceptibility to AI‑driven job displacement?
If AI replaces tasks that require higher cognition, declining cognitive abilities could increase vulnerability, warranting research.
Speaker: Sabina Dewan
How can education and skill development systems be redesigned to equip low‑literacy and remote populations for AI‑augmented work?
Effective upskilling is essential to ensure inclusive participation in the AI economy, especially where formal education is weak.
Speaker: Sabina Dewan
What role should competition policy and antitrust regulation play in preventing excessive concentration of AI benefits among a few large firms?
Concentration could worsen inequality; antitrust tools may be needed to maintain a competitive, inclusive market.
Speaker: Sabina Dewan
How can tax policy be leveraged to fund social protections, skill development, and other interventions needed in the AI era?
Identifying fiscal levers is important for financing the systemic changes required to mitigate AI’s disruptive effects.
Speaker: Sabina Dewan
What best practices exist for co‑creating AI systems with workers, communities, and employers to enhance job quality and protect rights?
Co‑creation can ensure AI aligns with human‑centred values and reduces adverse labor outcomes.
Speaker: Julie Delahanty
What are effective reskilling and upskilling strategies to transform junior developers into ‘managers of AI’ and similar hybrid roles?
Understanding how to redesign roles and training pathways is key for workforce adaptation to AI automation.
Speaker: Sandhya Ramachandran Arun
How does AI adoption differ across sectors (e.g., healthcare, finance, marketing) in terms of job displacement versus augmentation, and what sector‑specific research is needed?
Sector‑level insights can guide targeted policies and interventions tailored to distinct industry dynamics.
Speaker: Sandhya Ramachandran Arun
What governance frameworks are needed to embed human wisdom, empathy, and ethical considerations into AI deployment across industries?
Ensuring AI systems reflect human values is essential to prevent harmful outcomes and maintain public trust.
Speaker: Sandhya Ramachandran Arun
How can we systematically track and evaluate the long‑term effects of AI on workers’ mental health, productivity, and overall well‑being?
AI‑induced stress and cognitive changes could have broad societal impacts; longitudinal studies are required.
Speaker: Sabina Dewan
What lessons can be drawn from countries that have successfully integrated AI while minimizing inequality, and how can these be adapted to other contexts?
Cross‑country learning can inform policy design and avoid repeating mistakes in AI rollout.
Speaker: Julie Delahanty

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

Building Population-Scale Digital Public Infrastructure for AI


Session at a glance – Summary, keypoints, and speakers overview

Summary

The panel discussed how to scale AI for public good through “diffusion pathways,” a framework for rapidly spreading know-how, trust and institutional capability rather than just technology awareness [1-3][40-46]. Nandan Nilekani illustrated the speed gains achievable with this approach, noting that a farmer-support app took nine months to launch in Maharashtra, three months in Ethiopia, and three weeks for an Amul dairy solution, showing a dramatic reduction in rollout time [4-12]. He announced an ambition to create 100 diffusion pathways by 2030, backed by a global coalition that includes Anthropic, Google, the Gates Foundation and UNDP, and open to any participant [14-24][26-29]. Irina Ghose emphasized that diffusion succeeds when AI is delivered in the local language, fits seamlessly into users’ daily workflows, and is continuously iterated, citing tools like “co-work” that enable non-technical users to adopt AI [60-62][68-73]. She also introduced Anthropic’s Model Context Protocol (MCP) as a universal “language” for AI components, likening it to UPI’s role in payments and enabling one-time development for multi-sector deployment [250-254]. Trevor Mundeli warned that fragmented pilots hinder scaling and proposed government-funded “scaling hubs” in India and Africa to aggregate efforts, reduce duplication and accelerate country-level impact [84-99]. Esther Dweck described Brazil’s reforms through its Ministry of Management and Innovation, focusing on outcome-oriented procurement, strengthening digital infrastructure and a national digital ID platform (gov.br) to support AI services [128-136][137-140][144-148]. She detailed the INSPIRE program, which creates new institutional arrangements, promotes data sovereignty, builds AI platforms for education, health and agriculture, and runs a four-track training scheme for civil servants to build digital and AI capacity [199-207][208-227]. 
On safety, Trevor stressed the need for auditable AI, especially in health, noting Anthropic’s work on model transparency that lets clinicians trace recommendations [274-282]. Esther added that political-economic challenges such as digital sovereignty and age-verification for online safety are being addressed through legislation and local data control initiatives in Brazil [286-314]. The discussion concluded that robust digital public infrastructure (DPI) is essential for scaling AI, and by 2030 it may evolve into “digital public intelligence,” reflecting the collective confidence in achieving safe, inclusive AI impact at scale [315-317][30-32][15].


Keypoints


Major discussion points


Diffusion pathways as a fast-track to AI impact – Nandan Nilekani described how a farmer-app that took nine months to roll out in Maharashtra was launched in Ethiopia in three months and for dairy farmers in three weeks, illustrating the speed gains from reusable “pathways” and announcing an ambition to create 100 diffusion pathways by 2030 with a global coalition of partners such as Anthropic, Google, the Gates Foundation and UNDP [4-12][15-20][22-27].


Key ingredients for AI diffusion at scale – Irina Ghose emphasized that successful diffusion requires (1) local language/context, (2) embedding AI into existing daily workflows, and (3) an iterative, “AI-first” mindset that continuously engages users, citing examples from Indian language support and low-code tools [60-62][64-72].


Barriers to scaling pilots and the need for coordinated hubs – Trevor Mundeli highlighted the problem of fragmented pilots across ministries and sectors, proposing “scaling hubs” in partnership with governments (e.g., Rwanda, Nigeria, Senegal) to aggregate funding, expertise, and DPI infrastructure, thereby turning pilots into sustainable, population-scale services [88-99][100-103].


Public-sector reforms to enable AI adoption – Esther Dweck outlined three systemic changes needed within the state: (1) innovation-oriented procurement that tolerates risk and failure, (2) robust digital infrastructure (e.g., national digital ID and service platform), and (3) data-governance reforms (chief data officers, sovereign data policies) to break silos and support AI-driven public services [124-138][144-151][158-162].


Technical standards for plug-and-play AI – Irina later introduced the Model Context Protocol (MCP) as a universal “adapter” that lets AI models be built once and reused across domains (agriculture, health, etc.), analogous to how UPI standardized digital payments [250-254].


Overall purpose / goal of the discussion


The panel aimed to chart a collaborative roadmap for building, publishing, and scaling digital public infrastructure (DPI) for AI worldwide. By sharing concrete rollout examples, identifying systemic obstacles, and proposing both governance reforms and technical standards, the participants sought to mobilize governments, foundations, and technology firms around the “100 diffusion pathways by 2030” vision, ensuring AI is deployed safely, inclusively, and at population scale.


Overall tone and its evolution


The conversation began with an optimistic, celebratory tone, highlighting rapid successes and ambitious targets. As the dialogue progressed, it shifted to a pragmatic, problem-solving tone, acknowledging fragmentation, procurement hurdles, and safety concerns. By the end, the tone returned to hopeful and forward-looking, emphasizing concrete solutions (scaling hubs, MCP, policy reforms) and a collective call to action. Throughout, the atmosphere remained collaborative and constructive.


Speakers

Nandan Nilekani


Area of expertise: Digital public infrastructure, AI for agriculture and public good


Role/Title: Co-founder and Chairman of Infosys Technologies Limited (as noted in external sources) [S13][S14]


Speaker 1


Area of expertise: Event hosting/moderation (no specific domain)


Role/Title: Event host / moderator introducing the panel [S4][S6]


Shankar Maruwada


Area of expertise: AI diffusion pathways, public policy, panel moderation


Role/Title: Panel moderator


Irina Ghose


Area of expertise: AI model development, responsible AI, language localization


Role/Title: Managing Director, Anthropic India [S16][S17]


Trevor Mundeli


Area of expertise: Global health, AI scaling, philanthropic funding


Role/Title: President, Bill & Melinda Gates Foundation (global health focus) [S10]


Esther Dweck


Area of expertise: Public sector innovation, digital sovereignty, AI governance


Role/Title: Minister of Management and Innovation in Public Services, Brazil [S1][S2]


Additional speakers:


Om Birlaji – Speaker of Parliament of India (Chief Guest)


Martin Chungong – Secretary General, Inter-Parliamentary Union (IPU)


Laszlo Z – Deputy Speaker, Parliament of Hungary


Dr. Chinmay Pandya – Representative, All World Gayatri Parivar


Ms. Jimena – (Affiliation not specified in transcript)


Dario Amodei – CEO, Anthropic (referenced in discussion)


Full session report – Comprehensive analysis and detailed insights

The session opened with Nandan Nilekani offering a practical illustration of how “diffusion pathways” can accelerate AI-driven public services. He described a farmer-support app that required nine months to launch in Maharashtra, was replicated in Ethiopia in three months, and then adapted for dairy farmers by Amul in just three weeks [4-12]. From this experience he argued that lived implementation dramatically shortens rollout times and coined the term “pathways” for the repeatable routes that enable others to reach the same point more quickly [13-15]. He announced an ambition to develop 100 diffusion pathways worldwide by 2030, backed by a newly formed global coalition that includes Anthropic, Google, the Gates Foundation, UNDP and other partners, and invited any organisation to join [19-27][28-32]. Nandan also referenced “Blue Dot”, an initiative aimed at creating job opportunities through AI-enabled platforms [??-??], recalled the earlier “50 in 5” target (50 countries in five years) as a benchmark for the new ambition [??-??], and likened the 100-pathway target to the earlier DPI goal, underscoring continuity in scaling objectives [??-??].


Shankar Maruwada expanded the metaphor, likening diffusion to the spread of know-how, trust and institutional capability rather than mere awareness [40-46]. He described pathways as shared “rails” that compress learning curves, lower costs and reduce risk, thereby allowing AI to be used safely across societies [44-47]. He later noted that, like the Unified Payments Interface (UPI), technology must become “boring” and invisible to users for true diffusion to occur [247-254].


Irina Ghose identified three non-technical prerequisites for AI to move from pilot to population scale: localisation to the user’s language, seamless embedding into existing daily workflows, and an iterative “AI-first” mindset that keeps the technology relevant [60-62]. She illustrated how low-code tools such as Anthropic’s Co-Work enable non-technical users (teachers, health workers, small-business owners) to adopt AI without writing code [64-73]. To further reduce friction, she introduced the Model Context Protocol (MCP), a universal “adapter” that allows AI models to be built once and then plugged into multiple domains, much as UPI standardised digital payments [250-254]. Irina later positioned MCP as a “universal language” that lets AI models access tools and data across sectors without bespoke re-engineering [250-254].
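The “universal adapter” idea can be made concrete with a small sketch. MCP is, at its core, a JSON-RPC 2.0 protocol in which a client first discovers a server’s tools (`tools/list`) and then invokes them (`tools/call`). The in-process dispatcher below is an illustrative toy, not the official MCP SDK; the `crop_advice` tool, its schema, and its canned response are invented for the example.

```python
# Toy sketch of MCP-style JSON-RPC exchanges. Method names follow the MCP
# spec ("tools/list", "tools/call"); everything else here is a simplified,
# hypothetical stand-in for a real MCP server and transport.

TOOLS = {
    "crop_advice": {
        "description": "Advice for a given crop (hypothetical domain tool).",
        "inputSchema": {
            "type": "object",
            "properties": {"crop": {"type": "string"}},
            "required": ["crop"],
        },
    },
}

def handle(request: dict) -> dict:
    """Dispatch one JSON-RPC 2.0 request to the toy server."""
    if request["method"] == "tools/list":
        # Discovery: advertise every tool with its name and schema.
        result = {"tools": [{"name": n, **meta} for n, meta in TOOLS.items()]}
    elif request["method"] == "tools/call":
        args = request["params"]["arguments"]
        # A real tool would query a model, database, or API; we return canned text.
        result = {"content": [{"type": "text",
                               "text": f"Sowing advice for {args['crop']}: see pathway data"}]}
    else:
        return {"jsonrpc": "2.0", "id": request["id"],
                "error": {"code": -32601, "message": "method not found"}}
    return {"jsonrpc": "2.0", "id": request["id"], "result": result}

# One server, any client: discovery and invocation use the same two methods.
listing = handle({"jsonrpc": "2.0", "id": 1, "method": "tools/list"})
call = handle({"jsonrpc": "2.0", "id": 2, "method": "tools/call",
               "params": {"name": "crop_advice", "arguments": {"crop": "wheat"}}})
print([t["name"] for t in listing["result"]["tools"]])  # ['crop_advice']
print(call["result"]["content"][0]["text"])
```

Because discovery and invocation are standardised, the same server shape could in principle back an agriculture assistant today and a health or education assistant tomorrow, which is the “build once, deploy across sectors” point made on the panel.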


Trevor Mundeli highlighted a systemic obstacle: the proliferation of fragmented pilots across ministries and funders, which hampers scaling [88-99]. To counter this, he proposed government-funded “scaling hubs” in countries such as Rwanda, Nigeria, Senegal and Kenya that would aggregate funding, expertise and digital public infrastructure (DPI), acting as centres of excellence that channel diffusion toward national-level impact [84-99][100-103]. He argued that without such hubs, the “pilotitis” phenomenon would persist, preventing sustainable, population-scale outcomes.


Esther Dweck described Brazil’s parallel reforms aimed at creating the institutional backbone required for AI diffusion. She called for a shift in public-sector procurement from a focus on lowest price and risk avoidance to an outcome-oriented, failure-tolerant approach that encourages innovation and involves suppliers [128-138]. She also stressed the importance of robust digital infrastructure, specifically a national digital ID and the gov.br service platform, as the foundation for AI-enabled public services [144-148]. Complementary data-governance measures, such as appointing chief data officers, breaking data silos and enacting a new decree on data governance, were presented as essential for trustworthy AI deployment [150-162]. Through the INSPIRE programme, Brazil is creating new institutional arrangements, promoting data sovereignty, and running a four-track training scheme for civil servants to build AI and digital skills [199-227].


Safety and auditability emerged as a counterweight to the drive for speed. Trevor warned that high-stakes applications, especially in health, require transparent, auditable systems; black-box recommendations are insufficient for clinicians who need to trace the reasoning behind a suggestion [274-282]. He praised Anthropic’s research on model interpretability and suggested that India’s DPI stack could serve as a cautious testbed for safe AI introduction [267-273].


Esther added a political-economic dimension, noting Brazil’s pursuit of digital sovereignty through data localisation, resident clouds and supplier negotiations [286-304]. She cited recent legislation mandating age verification for online services, explaining how the government is seeking privacy-preserving verification methods that protect children without invasive surveillance [308-314].


In concluding remarks, Shankar Maruwada summed up the vision: by 2030 the world should move from “digital public infrastructure” to “digital public intelligence”, reflecting a mature ecosystem where AI is embedded, safe and universally accessible [315-317].


Collectively, the panel underscored that building open, safe, and locally adapted diffusion pathways, supported by institutional reforms, standardised protocols such as MCP, and responsible governance, is essential to realise the 100-pathway AI ambition by 2030 [315-317].


Session transcript – Complete transcript of the session
Nandan Nilekani

bought which farmers use and millions of farmers today, 2.5 million farmers have downloaded this app. And this was built to make sure that farmers have access to the best information about access to prices, access to weather information and so on. And it’s very sophisticated. It took nine months to get this going in Maharashtra. But we learned a lot about how to do these things. And the next implementation was done in Ethiopia. So in Africa, and Ethiopia did the same thing in three months. So essentially what took us nine months the first time around took us three months. And recently, at the request of the Prime Minister, Amul implemented the whole thing. And Amul implemented it for cows and bought for dairy farmers to understand about the cows and whether they’re lactating or whether they’re, you know, milk and so on.

And that was done in three weeks. So I think you went from nine months to three months to three weeks. So what is the message in that is that if you get the lived experience of implementing these kind of systems for public good, you can actually dramatically reduce the time in which you can do that. And we call these ways of reaching the goal faster, we call them as pathways, because once you have a pathway, then you can get, somebody else can get to the same point quicker. And just like we had this notion that we’ll have 50 in five, 50 countries in five years, we are also now setting an ambitious goal for doing 100 diffusion pathways by 2030.

In other words, by 2030, all of us together across the world will develop these pathways to diffuse the use of AI in a positive way to help farmers, improve the life of young kids, allow people to get jobs through something called Blue Dot. There are so many things going on, but all of them are designed to be effective. to improve and make better people’s lives, can meet the aspirations in a very inclusive way so that everybody is in, nobody is left out. And so we announced a partnership. We announced a coalition of this, of 100 diffusion pathways by 2030. We announced that yesterday or day before yesterday. And we have a global coalition. Anthropic is there. Google is there.

Gates Foundation is there. UNDP is there. A whole host of people are there. And it’s a very open, it’s a big tent. Anybody can join the coalition. But our goal is all of us work together to very, in a focused manner, develop these pathways of diffusion of different kinds of positive AI use cases and then actually make it happen in countries around the world. So just like 15.5 was a DPI goal, 100 diffusion pathways by 2030 is the AI goal we have. And we are confident that all of us collectively can get there. So I think this is important. I think it’s strategic for the world that we show the good use of AI, and it’s strategic that all of us work together to do that.

Thank you very much.

Speaker 1

Thank you so much, Mr. Nandan. At this point, I would love to invite our panelists up to the stage. We'll start by taking a quick group photograph together and then begin the discussion. So let me invite Minister Esther Dweck, Mr. Trevor Mundel, Ms. Irina Ghose, and Mr. Shankar Maruwada, accompanied by Nandan, to be on the stage for a quick group photograph. Thank you. Let me now hand it over to Shankar Maruwada, who will moderate the next panel.

Shankar Maruwada

Good afternoon. We have an exciting panel discussion ahead. Let me start off where Nandan stopped: a hundred pathways. What are these pathways? These are diffusion pathways to AI impact, safely and at scale. Let me provide a bit of background. France out-invented Britain in the first industrial revolution, yet Britain won it. Britain in turn out-invented the US in steel, Germany out-invented the US in chemistry, yet it was the US that won the second industrial revolution. What was the crucial thing? It was not better invention or even innovation. The missing ingredient was diffusion, which the United States of America did much better: diffusing the benefits and the impact of the technology throughout the economy and the society. When we say diffusion, we don't mean awareness or access. Diffusion, as Nandan described it, is the spread of know-how, trust and institutional capability that allows organizations to adopt AI safely and sustainably. As he explained, Maharashtra was the pioneer to do this in India. It's like Sir Edmund Hillary climbing Mount Everest for the first time: he inspires, he creates a pathway for others to follow. And it would be rather stupid if, after he came back, he said, I am not sharing this with others; the pathway I created, I have removed it, so now you find your own pathway. The societies that create such pathways allow a whole lot of others to prosper, to make progress, to create impact inclusively and equitably. That is why, when Nandan talked about a hundred diffusion pathways, these are a hundred diffusion pathways across sectors, countries, continents. Some may be led by proprietary models, some may be led by sovereign efforts, some may not be; it may differ. It's the choice of the AI adopter to decide which pathway works best for them.

So the diffusion infrastructure we are talking about creating isn't a platform, app or model. It's shared rails that compress learning curves, cost and risk, so that AI can be used by all of society, for all of humanity. With that, I would like to begin the panel discussion. Irina, from the model builder's perspective, what needs to be true for AI to be deployable at population scale, not just impressive pilots, especially in high-stakes public systems? What needs to happen?

Irina Ghose

Thank you so much, Shankar. And it is absolutely a pleasure and an honor to be here with all of you. The way I think about it is that AI deployment would seldom, if ever, have roadblocks because of complexity in the model or its performance. The only reason it fails to gain scale is the perception in our minds about that complexity. And one of the things we really feel is that you have to be all in, first yourself, then diffuse it to the people around you to make it happen. Now, think about it: in a pilot, you've got experts doing it, you've got guardrails, you've got the intensity of people, and you've got a select group.

Now, when that spreads out, you've got a teacher in Bihar implementing it, a health worker in Coimbatore, a small business leader in Indore, people who are not into ML. For them, AI will start having significance when it stops being a scientific tool and becomes something intuitive. So three things come into play. First, for diffusion, it needs to be contextual to the local language that you speak. Second, it needs to fit into the workflow of what you're doing every day, so you don't need to do net new things. And third, you have to be iterative and stay at it to make it happen.

And I'll give you a small example of how diffusion is happening. First of all, Shankar, really honored to have worked with EkStep to make it diffuse across so many realms of life. At Anthropic, too, we said that it's not a technology for the sake of technology, only in the hands of developers and builders. We found that India happens to be the second largest user base of Claude outside the US. So a big round of applause to all of us out here for making that happen. And what we also felt, when we are building tools, one of the tools you might have heard of is Cowork, for the kind of work that earlier used to be done largely by developers.

But now, people who are information workers, or who are just thinking about how to solve things, can use it. The idea is that you do not have to develop code or read a lot of intense documentation; you can make the tool work for you. So in my mind, diffusion really means, first, that everything I do, I have to be AI-first about. Second, that I enthuse the ecosystem around me in India. And third, how am I giving back to everybody in the last mile to make it happen?

Shankar Maruwada

Fantastic. One of the things I liked about what Anthropic CEO Dario Amodei said is: very soon, imagine a country with a whole bunch of geniuses living in data centers. What will that country do? Think about it. But till we reach there, and Dario says that's in two or three years, Trevor, as president of global health at the Gates Foundation, you are dealing with a situation where you've seen a whole bunch of AI pilots, and not too many of them have scaled. From your experience, what separates pilots from systems that have scaled and become institutional? What separates an experiment from scaled, institutional, sustainable impact?

Trevor Mundel

Thank you, Shankar. And thank you for the invitation to be on this good panel, and also for the overview you gave me a few days ago of the very good work you're doing at EkStep. I learned about Open AgriNet and where that has made progress. But on this issue of scaling of AI: I had an opportunity this morning to sit down with the heads of entities which we call scaling hubs. There are two of them here in India, and there are three, soon to be four, in Africa. And there's also a pan-African venture called Smart Africa. You might say, well, what are these scaling hubs? The idea is that we support a partnership with the governments, now in Rwanda, Nigeria, Senegal, and soon Kenya, wherein we place funding that the government can use to take the pilots that are out there and really push them to large scale.

And why would we need a hub like this to do that? Well, one of the big barriers we are currently seeing is the fragmentation occurring out there: many, many ventures, some that we fund, some funded by others, all with very good intent. Let's do a small pilot; let's quickly do something over here. Thousands of them occurring out there. Take it at a government level: they have people approaching the Ministry of Agriculture, the Ministry of Education, the Ministry of Health, the Ministry of Finance, all of them with different groups, and on the DPI front, all of them trying to put in place the necessary DPI infrastructure to support their pilots. And it is this fragmentation that I think is a big inhibitor of scaling to the real population scale that we need.

So we are going to invest in these hubs that can be points of aggregation. We don’t want to inhibit diffusion. People have the idea of diffusion as a more random process which goes anywhere, and there’s something good about that. But if we can channel the diffusion into these centers of excellence, I think at the country level, the feedback that we’ve had from the governments is that that is a way that we are really going to get to scale more rapidly. Thank you.

Shankar Maruwada

Excellent point. Excellent point, Trevor. And I think you brought out the inherent stress in the phrase diffusion pathways. Diffusion by definition goes everywhere, right? Pathways by definition are fixed. So it's how you spread a technology along certain fixed pathways towards certain impact. It is indeed a stress. I believe that stress needs to be there, because we are talking of safe AI impact at scale. But it is indeed a challenge, and together we have to solve it very quickly. I want to talk a bit about Minister Esther Dweck's ministry, MGI, or the Ministry of Management and Innovation. Isn't that a cool concept? The government of Brazil has a minister and a ministry looking after the idea of innovation and management.

They are collaborating very closely with India on a range of issues, and it's my honor, Your Excellency, to have you here. Minister, I want to ask you a question. Scale efforts at diffusion often fail inside government, not because of technology, but because of procurement, process change and accountability. What has to change inside the state for AI to move from pilots to durable public services?

Esther Dweck

Thank you, Shankar. Thank you for inviting me, and also for the partnership that we have with India. Brazil is looking for this partnership with India because of scale: if anything can be scaled up in India, it can be in Brazil, because compared to India we are not such a big country, but compared to many other countries we are very large. So this partnership is very important for us. But when you talk about the problem inside the state: our ministry was created for this. The full name is Ministry of Management and Innovation in Public Services. So we are focusing on innovation inside public services. And we created a special secretariat for state transformation, because we saw that the state had to be transformed in order to actually be able to have innovation.

Because if we stay with the same way of doing procurement, we won't be able to do it. So we think that, in terms of AI, we need to transform the state in three main areas. The first one is procurement, for sure; any kind of innovation procurement needs to be changed. Then the infrastructure, especially the digital infrastructure, and of course the governance. And when I talk about the procurement process: usually people are looking for the lowest price and the lowest risk, and civil servants are very afraid of doing procurement, because the auditing bodies are looking to see if they're doing something wrong. So they usually try to go for the lowest risk possible.

And this is what prevents innovation inside government, especially because innovation comes with errors. We know that any innovation might come with errors, and if the civil servant cannot make any mistakes, then we never innovate. One of the things we found out when we asked how to do innovation procurement in government is that the first thing people say is: I'm afraid of making any mistakes, because the auditing body will come after me and then I won't be able to be a civil servant. So what we have done is change the mindset of the procurement process. Instead of being process-oriented, we are becoming more policy-oriented, looking at outcomes and not only the lowest price.

And with many other ministries, we are discussing how to actually build that culture of innovation procurement, with the idea that it may fail, and that you can also interact with the one you're buying from. Because, of course, you're buying something that doesn't exist; how do you explain to them what you need? So there are a lot of things you have to change in terms of procurement in order to actually be able to do AI. And, of course, the second thing is the digital infrastructure. As Nandan has said before, since 2023, when we came here for the G20 in India, Brazil brought back this idea of DPI as something very strong.

We already knew that we had something that could be called DPI, but we didn't know the concept before. One of the things that was very important for us was our digital ID and our digital platform for services, both of which are called gov.br. Based on this platform, what we are discussing now is optimizing, but also having more personalized services, knowing the people. If you know the citizen, you can provide them a specialized service, and we're using AI to do this: to actually specialize the service to what people actually need. So I think having a good DPI infrastructure, especially in terms of identification, and of course better data governance, is key.

That's the third thing I would like to mention: governance inside the state. When we launched our plan for AI (and this morning we had a session on the Brazilian AI plan), the first thing the president said is that we need our database. He said: we need the Brazilian database. We cannot have silos anymore. We cannot have a minister saying, no, this is my data, no one can access this data. So we have to do it, of course, in a privacy-preserving, secure way. So we discussed all the data governance, and we're about to launch a new decree on data governance, requiring every ministry to have a chief data officer, someone who actually knows the data and knows how to use the data.

So we are actually looking at these things in order for the state to be able to innovate with AI. That's it. Thank you.

Shankar Maruwada

Wonderful. Thank you. Irina, you've been in the IT space for three decades. You've seen the Internet boom and bust, and now you're seeing AI. From your vast experience, what is the most common failure mode when AI moves from pilots to everyday workflows? And what kind of safety infrastructure actually prevents it?

Irina Ghose

Yeah, I think one of the things we have to remember is that the failure never happens with a big bang; it just slowly dies, because people gradually reduce their level of interaction with it, and you suddenly realize it's not relevant anymore. So what really needs to happen is that you keep it in a way that people use it daily, and use it in a way that is contextual for each of them. For example, one reason it might fail is that the data sets speak to a country of a different nature, setting benchmarks in banking and financial systems, which is not the same in a place where agriculture is the biggest thing we require. Hence collecting data for Indian languages, nuancing it by, say, legal, by agriculture, by what people are speaking in that dialect, in that language, is very critical. So if I look at the three things that need to happen: first of all, keep it contextual to the domain, the micro-domain in which it is required. At Anthropic we have worked closely to ensure that we now have Indic language availability for 10 Indian languages, from Hindi to Malayalam to Gujarati to Urdu; it's available in the latest models and is incrementally improving day by day. And the last part, I would say, is ensuring that whatever you are doing, the ROI we look at should be: if I invest in a language, say Bengali, how many net new use cases have been opened up because of that, and how many more people have got the benefit of it? I think the work we are doing with EkStep, across fields like employment, education, healthcare, everything, that's the litmus test we should be measuring ourselves on.

Shankar Maruwada

I want to ask a question to the audience, by raising hands: how many of you use UPI? Keep your hands up if you know how UPI works, what the protocol behind it is, what the technology behind it is. Hands are steadily coming down. This is my point: we don't care about technology as long as it works. For something to work at population scale, technology has to be boring; technology has to be invisible. Till the time it has not diffused, it is just some magic, mystery thing that we are all stuck with, figuring out what to do. It's a long journey from technology as magic to technology as normal, boring. In fact, a wise old man once told me: when you stop thinking of something as technology, that's when it has diffused. Five hundred years ago, this was magical ocular technology.

It allowed someone to see. Now we don't think of it as technology. A day will come when we don't think of AI as technology. That is the day we can say that AI has diffused through all of society. We have some way to go for that. Trevor, when you hear of things like Open AgriNet, some exciting work happening, what makes you believe it feels like infrastructure, versus yet another project going down the path of pilotitis, death by pilots?

Trevor Mundel

Well, I do look a little bit with envy at Open AgriNet. Having looked across the work that the foundation does in agriculture and in health, traditionally the narrative has been how fortunate those health folks are, because there's such huge funding into the health areas, such huge investment in research, in genomics, in human health, and much less in plant genomics, which admittedly is potentially more complex, and in the clinical trial infrastructures for developing new products on the human health side versus the agriculture side. But now we come to AI, and I have to say I look at Open AgriNet and I think the agriculture community is ahead of human health in terms of the implementation of a system which is personally useful to a farmer, a smallholder farmer, for instance: being able to get the information they need, being able to determine what crop disease they have to deal with, or a disease in their cattle, what the weather is going to be, and how they can maximize the finances of their small farm.

All of these types of things I would love to see in the health space: a personal health assistant. In low- and middle-income countries, so many people are not very close to a tertiary hospital, and they may be 10 or even 20 miles from a primary health care clinic. Can we not provide them with a system that can personally give them the information they need, in a safe way? And I think Open AgriNet really puts those components of infrastructure together. The way that it's modular, the way that you can adapt it to local circumstances, it's in many ways exactly what we need on that personal health side of the picture. So I only have some envy, but I hope we can duplicate that on the health side.

Thank you.

Shankar Maruwada

Thank you, Trevor. Open AgriNet is just a group of organizations coming together, collaborating, as Trevor said, each bringing in one piece of the puzzle, so that together we can create those diffusion pathways. And as Nandan said, that is what allows us to take something from Maharashtra, which took nine months, to Ethiopia in three months, and back to India in three weeks; from agriculture to livestock, from India to Ethiopia, from Asia to Africa and back. That is the exciting possibility of the journey India has been on for the last 15 years, what we call DPI. The thing about DPI is that when you start with a strong use case in mind, as Irina and others have said, you harness technology, so technology becomes a good slave to a very powerful cause.

Then you take advantage of rapidly evolving technology. Minister Dweck, if you designed a national diffusion pathway for one public service, what would you prioritize first: institutions, incentives, data readiness or governance?

Esther Dweck

Well, it's difficult to choose only one thing, I guess. Maybe, with this perspective from management, you're always looking for some kind of systemic approach, trying to look at all these things together. And actually, we recently launched an R&D program for AI in Brazil. It's called INSPIRE in English; in Portuguese the same acronym means "breathe in". It stands for AI for Public Service with Innovation, Responsibility, and Ethics, and it has this systemic approach inside it. The first thing is that we created a new institutional arrangement. In this R&D project we have the government, of course, we have some state-owned companies, we have some private companies, and our innovation ecosystem in Brazil, all of them brought together in order to help the government build new AI platforms.

Because although we're already using AI in Brazil, we saw that we have a real lack of technological expertise and a lack of financial support as well. So we're trying to create this platform where we can offer many bodies of the government different solutions that can be used in many different areas, as you said, as I was saying before. So, first, we are discussing having more sovereignty over the data and how to actually use it better, but also making the data ready to be used. One thing I was explaining before: we are using AI to help improve our data sets. So it's going both ways.

Another thing is the governance perspective. Of course, we're creating, as I mentioned, these shared tools and common practices, and trying to share how; specifically in this project, we're creating this generative AI platform and trying to apply it to different solutions. So recently, at the end of last year, we had the university enrollment exam for people finishing high school, and we created a complete assistant for them to know, when they're finishing school, what they're going to do. Are they going to the job market? Are they going to enroll in university? How to apply? What's the best thing for them? So we are using AI to help them actually decide this. And we're doing the same thing for health care and for the agriculture sector as well.

So we're looking at all these things. And, of course, capacity building: we are doing a lot of training of civil servants. We have four tracks, actually: for the top managers, for IT experts, for people controlling data, and for regular civil servants. Because when we're talking about state transformation, the one thing you have to train and to change, of course, is the civil servants. Nowadays they have to have a digital mind, and some of them have been there for many years and didn't have the digital capabilities. So we're training all of them in digital capabilities, and specifically on AI as well, in order to think about how to use this new technology in their regular work to improve civil service.

So I think it’s a more systemic approach there.

Shankar Maruwada

Pathways are like digital rails. What should model developers focus on so that AI can plug into these pathways safely across sectors and countries?

Irina Ghose

Very interesting. And I'll just try to paint the picture by giving some context. Think about it: we're talking a lot about agriculture, which has the last mile. Now, if you were to solve for that farmer day in and day out, there are various kinds of work they have to do. The weather conditions are one source of data. How the crop yield is performing is another source of data. The market prices are another source of data. And then whatever has to be done for reaping and sowing. If anybody wants to infuse AI on top of these kinds of data and has to build it every time, it is so cumbersome.

Now, it's the same thing that, Nandan, you've been talking about: at one point of time all our connectors were different, and then the universal adapter came and took that problem away. We all use UPI for digital payments; do we know anything about the technology behind it, whatever is happening behind that small micropayment? We have no idea. So one of the things to be done here is to have a universal language which accesses the tools as well as the data. So we came out with this concept at Anthropic in 2024 called the Model Context Protocol. And very simplistically put, I think of MCP to AI as what UPI was to payments.

And in effect, what it really does is that you develop things once and make them MCP-ready, and for anything further you want to do, you do not have to keep writing it again and again. So all the cases of agriculture, healthcare, anything else put together, can happen seamlessly. Why does it matter for India? There's a lot of data which already exists in health, in education, in the various ways that citizen services run, and that is a rich level of data. So if we make this data AI-ready, using the tools that are out there, then the case for diffusion, and that accountability of everybody coming together, will be that much quicker.
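The "build once, expose through one protocol" idea behind MCP can be sketched in toy form. This is only an illustration under stated assumptions: the real Model Context Protocol is a JSON-RPC-based standard with official SDKs, and everything below (the ToolRegistry class, the weather and market_price tools) is invented for the example, not part of MCP itself.

```python
import json

# Toy sketch of a uniform tool interface: each capability is registered
# once, then any client can discover and call it through one protocol,
# instead of writing a custom integration per data source.

class ToolRegistry:
    def __init__(self):
        self._tools = {}

    def register(self, name, description):
        """Decorator: register a function once under a discoverable name."""
        def wrap(fn):
            self._tools[name] = {"description": description, "fn": fn}
            return fn
        return wrap

    def list_tools(self):
        # Discovery step: a client learns what exists without prior knowledge.
        return [{"name": n, "description": t["description"]}
                for n, t in self._tools.items()]

    def call(self, request_json):
        # One uniform entry point for every tool, like one payment rail
        # serving many banks.
        req = json.loads(request_json)
        tool = self._tools[req["tool"]]
        return json.dumps({"result": tool["fn"](**req.get("args", {}))})

registry = ToolRegistry()

@registry.register("weather", "Daily forecast for a district")
def weather(district):
    return f"Forecast for {district}: light rain"

@registry.register("market_price", "Mandi price for a crop")
def market_price(crop):
    return {"crop": crop, "price_per_quintal": 2150}

print(registry.list_tools())
print(registry.call('{"tool": "weather", "args": {"district": "Indore"}}'))
# -> {"result": "Forecast for Indore: light rain"}
```

The point of the sketch is the shape, not the details: once the weather tool is registered, an agriculture app, a health app, or an AI agent all reach it through the same discovery-and-call contract.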

Shankar Maruwada

Excellent. A lot of people who deploy AI have an old notion that it's like normal software: you buy great software, it is perfected, you deploy it, and you can close that and go away. In AI, that is just the start, because as you use it, data comes in. The data gets better, the models get better; with better models, you provide better services; usage increases; more usage, more data. This cycle keeps going, and while it is happening, the models improve and the data improves. So for a lot of adopters, once they go beyond procurement, how do you continuously invest to upgrade and evolve? That's again a very important question. So when we talk of 100 diffusion pathways, these are 100 diffusion pathways to safe AI impact at scale, which creates a second stress, and I'll come to you on that, Trevor.

When lives are at stake, where do you draw the line between speed, a hundred pathways by 2030, and safety? And coming from health, safety literally means lives, right?

Trevor Mundel

Yes, Shankar, and there are a lot of lives at stake, and I feel the urgency. Every year we don't have the next generation of malaria vaccines, we see hundreds of thousands of young children dying. Every year we don't have a personalized education coach for every child, no matter where they are, we see a tremendous amount of human potential wasted. So there is this urgency to get things done, and that might make one think very carefully on the safety front. And it is that safety issue where people in the health area are saying: we need to take a step back, we need to look carefully at the frameworks before we just jump in with, say, the personal health application I talked about. How would that be gated? How would that be guarded?

I do think that, because of the excellence of the DPI stack here in India, and because of the thousands of application efforts I see, you are going to probe those frameworks for safe introduction, probably first in a context which is, as Nandan was mentioning, the frugal innovation that will be relevant across lower-middle-income countries and actually beyond. So I do think we are very much looking at India as the foundry of AI application, and we want to see those frameworks whereby we can safely introduce the technology. In terms of the technology itself, just having a black-box system that gives a health recommendation is almost never adequate, almost never satisfactory.

These systems need to be auditable. And I have to say that Anthropic has made quite a lot of progress in their research on how these concepts, how these recommendations, are actually represented in the model. People want to be able to audit that; they don't just want something that comes out of nowhere. If you have a human clinician who makes an error, you can talk to that person. You can ask: why did you think this was the case when you made a misdiagnosis here? Was it because you didn't elicit the right question from the patient, or because you transcribed incorrectly? That is the kind of transparency we actually demand of AI systems at the end of the day.

So I think that, between the work going on here in India and some of that transparency research, we can get there.
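One practical piece of the auditability Trevor describes can be sketched as an append-only, hash-chained log of model recommendations: a reviewer can later reconstruct which inputs and model version produced a given output, and any tampering with the record breaks the chain. A minimal sketch, assuming nothing about any real deployment (the AuditLog class and its fields are illustrative):

```python
import hashlib
import json

# Illustrative audit trail: each recommendation is logged with its inputs,
# model version, and a hash linking it to the previous entry, so the full
# history is reviewable and tamper-evident.

class AuditLog:
    GENESIS = "0" * 64

    def __init__(self):
        self.entries = []
        self._prev = self.GENESIS

    def record(self, inputs, recommendation, model_version):
        """Append one recommendation; returns its chained hash."""
        entry = {
            "inputs": inputs,
            "recommendation": recommendation,
            "model_version": model_version,
            "prev_hash": self._prev,
        }
        digest = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()).hexdigest()
        entry["hash"] = digest
        self._prev = digest
        self.entries.append(entry)
        return digest

    def verify_chain(self):
        """Recompute every hash; False if any entry was altered or reordered."""
        prev = self.GENESIS
        for e in self.entries:
            body = {k: v for k, v in e.items() if k != "hash"}
            if e["prev_hash"] != prev:
                return False
            recomputed = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()).hexdigest()
            if recomputed != e["hash"]:
                return False
            prev = e["hash"]
        return True

log = AuditLog()
log.record({"symptoms": ["fever", "chills"]}, "refer for malaria test", "model-v1")
print(log.verify_chain())  # True: the chain is intact
```

This only makes the record inspectable and tamper-evident; explaining *why* the model produced a recommendation is the separate interpretability problem Trevor refers to.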

Shankar Maruwada

Thank you, Trevor. Minister Dweck, as you’re thinking of implementing AI solutions at scale, what is the hardest political or economic challenge, and what are some tips on how one should deal with it?

Esther Dweck

Okay. I think it's kind of a political economy issue that we are looking at now in Brazil. Of course, one thing is the workforce problem, because we may be going to this utopia where no human needs to work anymore and the machines work for us. So how do we actually divide the wealth that comes from these machines working? That's one point. But more concerning for us in the current period in Brazil is digital sovereignty. Of course, very few countries, maybe only two countries in the world, would be totally digitally sovereign right now. But I think we have to increase our digital sovereignty in terms of being able to have our services and operate them, to know where our data is, and to know how we'll be able to continue delivering our services to our populations.

So we are discussing a lot of this in Brazil: how to increase our level of digital sovereignty. Of course, we know we probably won't be totally digitally sovereign within a few years, but at least we want to increase it. And we're actually working with our suppliers in order for them to offer us more sovereignty, or at least some security that we will not have any discontinuity. So I think using state capacity and the state's procurement purchasing power is very important to do this.

And we're actually using it when we talk to our suppliers. We discuss this sovereignty on three levels. The first is the data level, and for this we're bringing the data back to Brazil: as I mentioned before, we have two federal, state-owned companies that are building resident clouds within our own companies, so we know where the data is. The second is operational: only knowing where the data is is not enough, so we are increasing our operational access to the data. And the third level is how you use the technology, something that we've been discussing a lot here; it's not directly related to AI, but it's related to digital services. One thing that we're doing together here in India, using a technology that was developed here, verifiable credentials, was very important for us. We are using it right now in two pilot projects, but we want to scale it up.

One is related to rural credit, but the second is related to something that I think the whole world is discussing: how to protect children online. In Brazil we passed a law last year, a very important law, and it passed very quickly, after one of the digital influencers showed what was happening to children on the Internet, especially on social media. The bill says that by 17 March you have to know the age of the person accessing the Internet. So how do you do this in a way that protects privacy, so that we don't actually know what people are using? A lot of things are being discussed, and we're trying to use verifiable credentials to do this age verification in a very simple way, very easy for people, and so that people are not afraid that the government is actually watching the Internet.

So I think this is the way to make things that are actually useful and important to protect our citizens but also to provide them with very good services.
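The privacy-preserving age check described above can be sketched in miniature: an issuer who knows the birthdate attests only a boolean predicate ("over 17"), bound to a pseudonym, so the verifier never sees a name or birthdate. This is a hedged illustration, not Brazil's actual system; real deployments use W3C Verifiable Credentials with public-key signatures (and often zero-knowledge proofs), whereas this sketch simplifies to an HMAC with a shared key, and all names in it are invented.

```python
import hashlib
import hmac
import json

# Simplification: a shared HMAC key stands in for the issuer's signing key.
# In a real credential system the issuer signs with a private key and the
# verifier checks with the public key, so it never holds an issuer secret.
ISSUER_KEY = b"demo-issuer-secret"

def issue_credential(user_id: str, over_17: bool) -> dict:
    # The issuer knows the birthdate but emits only the boolean predicate,
    # bound to a pseudonymous subject id; no name or birthdate is included.
    subject = hashlib.sha256(user_id.encode()).hexdigest()[:16]
    claim = json.dumps({"over_17": over_17}, sort_keys=True)
    sig = hmac.new(ISSUER_KEY, f"{subject}|{claim}".encode(),
                   hashlib.sha256).hexdigest()
    return {"subject": subject, "claim": claim, "sig": sig}

def verify_credential(cred: dict):
    # The verifier (a website's age gate) checks the issuer's signature and
    # learns exactly one bit: over 17 or not. Returns None if tampered.
    expected = hmac.new(ISSUER_KEY, f"{cred['subject']}|{cred['claim']}".encode(),
                        hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, cred["sig"]):
        return None
    return json.loads(cred["claim"])["over_17"]

cred = issue_credential("citizen-123", over_17=True)
print(verify_credential(cred))  # True: age gate passes, identity stays hidden
```

The design point matches the minister's concern: the government issuer never learns which sites are checked, and the sites never learn who the citizen is, only that the signed predicate holds.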

Shankar Maruwada

Thank you. Today's topic was building population-scale digital public infrastructure for AI. By 2030, when we will have made a lot of progress on that, we will stop calling DPI digital public infrastructure and start calling it digital public intelligence. With that, a big thank you to all my panelists and to the audience. Thank you.

Irina Ghose

Thank you. Shankar, if I can just request you to present a token of appreciation to the panel. Thank you. Now the next session is about to start, on a very unique topic, AI for Democracy. So we request all the audience here to remain seated. A very wonderful topic, AI for Democracy, and we are very blessed that today we have with us our Honorable Chief Guest, Mr. Om Birla ji, Speaker of the Lok Sabha, Parliament of India, Mr. Martin Chungong, Secretary General, IPU, Mr. Laszlo Z, Deputy Speaker, Parliament of Hungary, Dr. Chinmay Pandya from All World Gayatri Parivar, Ms. Jimena.

Related Resources: Knowledge base sources related to the discussion topics (12)
Factual Notes: Claims verified against the Diplo knowledge base (3)
Confirmed (high)

“The farmer‑support app took nine months to launch in Maharashtra, was replicated in Ethiopia in three months, and then adapted for dairy farmers by Amul in just three weeks.”

The knowledge base states the app took nine months to build [S3] and later notes the rollout was compressed to three weeks after an intermediate three-month period [S24], confirming the reported timeline.

Additional Context (medium)

“Diffusion is likened to the spread of know‑how, trust and institutional capability rather than mere awareness.”

S30 describes diffusion as “walking the path” with domain knowledge and replaceability, emphasizing practical capability over simple deployment, which adds nuance to the metaphor presented.

Additional Context (medium)

“Like the Unified Payments Interface (UPI), technology must become “boring” and invisible to users for true diffusion to occur.”

S79 uses the UPI analogy to illustrate how a technology succeeds by becoming a ubiquitous, low‑friction infrastructure, providing supporting context for the claim about “boring” diffusion.

External Sources (80)
S1
A Digital Future for All (morning sessions) — – Esther Dweck (Minister, Brazil) discussed DPI for efficient government services, financial inclusion, and environmenta…
S2
(Interactive Dialogue 3) Summit of the Future – General Assembly, 79th session — – Esther Dweck (Minister of Management and Innovation in Public Services of Brazil)
S3
Building Population-Scale Digital Public Infrastructure for AI — – Esther Dweck- Irina Ghose – Irina Ghose- Esther Dweck – Nandan Nilekani- Trevor Mundeli- Esther Dweck
S4
Keynote-Martin Schroeter — -Speaker 1: Role/Title: Not specified, Area of expertise: Not specified (appears to be an event moderator or host introd…
S5
Responsible AI for Children Safe Playful and Empowering Learning — -Speaker 1: Role/title not specified – appears to be a student or child participant in educational videos/demonstrations…
S6
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Vijay Shekar Sharma Paytm — -Speaker 1: Role/Title: Not mentioned, Area of expertise: Not mentioned (appears to be an event host or moderator introd…
S7
https://dig.watch/event/india-ai-impact-summit-2026/ai-meets-agriculture-building-food-security-and-climate-resilien — Dr. Chaturvedi leads our national effort in agriculture and farmer’s welfare. Mr. Johannes Jett, he is the Regional Vice…
S8
https://dig.watch/event/india-ai-impact-summit-2026/ai-for-agriculture-scaling-intelegence-for-food-and-climate-resiliance — So we are happy to have support and assistance from MSSRF in that direction. My final question is to Mr. Shankar Maruwad…
S9
AI Meets Agriculture Building Food Security and Climate Resilien — – Dr. Soumya Swaminathan- Shankar Maruwada Dr. Swaminathan advocates for a cautious, medical research-style evaluation …
S10
Transforming Health Systems with AI From Lab to Last Mile — -Trevor Mundel: Dr. Trevor Mundel (medical degree and Ph.D. in mathematics), Rhodes Scholar, extensive experience in…
S11
https://dig.watch/event/india-ai-impact-summit-2026/transforming-health-systems-with-ai-from-lab-to-last-mile — And welcome. And… And her background is also in this both biomedical field, science innovation field, but also has ext…
S12
Building Population-Scale Digital Public Infrastructure for AI — – Nandan Nilekani- Trevor Mundeli – Trevor Mundeli- Esther Dweck
S13
Keynote-Rishad Premji — -Mr. Nandan Nilekani: Role/Title: Not specified; Area of expertise: Artificial intelligence (described as pioneer and th…
S14
High Level Session 2: Digital Public Goods and Global Digital Cooperation — – **Nandan Nilekani** – Co-founder and chairman of Infosys Technologies Limited (participated online) Karianne Tung, Ve…
S15
https://dig.watch/event/india-ai-impact-summit-2026/fireside-conversation-01 — Thank you so much, Mr. Sikka, for your profound and very interesting remarks. And of course, your work at VNI also exemp…
S16
https://dig.watch/event/india-ai-impact-summit-2026/regulating-open-data_-principles-challenges-and-opportunities — Thank you so much, Vedashree. That was very concise and even compelling. Especially coming from a regulatory standpoint….
S17
Keynote-Dario Amodei — – Irina Ghose: Managing Director for Anthropic India, has three decades of experience building businesses in India (menti…
S18
Building Population-Scale Digital Public Infrastructure for AI — – Irina Ghose- Esther Dweck – Nandan Nilekani- Irina Ghose
S19
Fireside Conversation: 01 — This fireside conversation featured Nandan Nilekani, co-founder of Infosys and architect of India’s Aadhaar system, and …
S20
Building Scalable AI Through Global South Partnerships — Yeah, thank you so much. And you talked about DPI, you talked about the private sector, public coming together. It’s the…
S21
Open Forum #64 Local AI Policy Pathways for Sustainable Digital Economies — Abhishek Singh: Thank you for convening this and bringing this very, very important subject at FORC, like how do we bala…
S22
Lightning Talk #173 Artificial Intelligence in Agrotech and Foodtech — The speaker addressed practical challenges in implementing AI solutions for farmers in low-income countries. She stresse…
S23
AI for Social Good Using Technology to Create Real-World Impact — And I think that’s what we’re doing. And to give you another example of how it reduces the complexity, there’s a very in…
S24
https://dig.watch/event/india-ai-impact-summit-2026/building-population-scale-digital-public-infrastructure-for-ai — And I’ll give you a small example as to how diffusion is happening. First of all, Shankar, really honored to have worked…
S25
https://dig.watch/event/india-ai-impact-summit-2026/safe-and-responsible-ai-at-scale-practical-pathways — Now with education, when we are working recently, we realized that LLMs are becoming increasingly good, at least with th…
S26
Safe and Responsible AI at Scale Practical Pathways — “The moment they hit any domain‑specific vocabulary, that’s when they start failing.”[64]. “came up with a solution of u…
S27
Multilingual Internet: a Key Catalyst for Access & Inclusion | IGF 2023 Town Hall #75 — Audience:Hi, my name is Keisuke Kamimura, professor of linguistics and Japanese at Daito Bunka University in Tokyo. And …
S28
U.S. AI Standards Shaping the Future of Trustworthy Artificial Intelligence — – Owen Lauder- Michael Brown- Wifredo Fernandez- Austin Marin- Sihao Huang Examples include enterprise knowledge bases,…
S29
All hands on deck to connect the next billions | IGF 2023 WS #198 — To address the digital divide, a whole-of-government and whole-of-society approach is advocated. Initiatives are being i…
S30
Collaborative AI Network – Strengthening Skills Research and Innovation — “We’re talking of AI being a possible DPI, a digital public infrastructure.”[1]. “I think those are aspects which a DPI …
S31
Operationalizing data free flow with trust | IGF 2023 WS #197 — Conversations about data governance should occur in various settings, including normative and legal frameworks. These di…
S32
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Panel Discussion — “The general panel is about policy on the one side, adoption on the other”[52]. “…we have to work downstream and upstr…
S33
Networking Session #37 Mapping the DPI stakeholders? — Ekanayake highlighted that DPI implementation requires government departments to work together in new ways around shared…
S34
AI for agriculture Scaling Intelegence for food and climate resiliance — “We will move from pilots to platforms, from fragmented data to interoperable systems, from experimentation to execution…
S35
Leading in the Digital Era: How can the Public Sector prepare for the AI age? — Modernising government processes is also on Barbados’s agenda, to align with the pace of technological development. The …
S36
Bridging the AI innovation gap — This comment provides a profound reframing of technical standards from bureaucratic requirements to tools of global equi…
S37
Indias AI Leap Policy to Practice with AIP2 — The discussion revealed tensions between global harmonization and local adaptation needs. Adams argued against one-size-…
S38
Anthropic’s MCP aims to transform AI integration — Anthropic hasunveiledthe Model Context Protocol (MCP), an open-source standard designed to improve AI assistant performa…
S39
Building Population-Scale Digital Public Infrastructure for AI — Diffusion requires technology to become contextual, workflow-integrated, and iterative rather than remaining a scientifi…
S40
A bottom-up approach: IG processes and multistakeholderism | IGF 2023 Open Forum #23 — The analysis also highlights the shrinking opportunities for participation in UN processes related to internet governanc…
S41
Research Publication No. 2014-6 March 17, 2014 — – Many of the positions the US government has taken across roles – and both domestically and internationally – are at le…
S42
Harmonizing High-Tech: The role of AI standards as an implementation tool — Philippe’s address emphasised the critical function of public-private partnerships in fostering standardisation that und…
S43
WS #290 Sovereignty and Interoperable Digital Identity in Dldcs — Technical experts from CityHub and the OpenID Foundation discussed the complexity of transitioning from physical to digi…
S44
Tech attache briefing: Technical standards: Policy implications and international landscape — -Examine the policyimplications of standardsand discuss the interaction between standards and regulations. -Mapping the…
S45
Aligning AI Governance Across the Tech Stack ITI C-Suite Panel — It doesn’t mean that countries can’t have their own perspectives or sovereign outlooks, but there is sort of a… a move…
S46
https://dig.watch/event/india-ai-impact-summit-2026/building-population-scale-digital-public-infrastructure-for-ai — And this is what prevents innovation inside the government, especially because innovation comes with errors. We know tha…
S47
WS #288 An AI Policy Research Roadmap for Evidence-Based AI Policy — The discussion highlighted the importance of policy interoperability rather than uniform global governance, recognizing …
S48
Digital sovereignty in Brazil: for what and for whom? | IGF 2023 Launch / Award Event #187 — Flavio Wagner:Thank you, Raquel. So, hi everybody. Nice to have you with us here this morning in Japan. So Brazil is a v…
S49
Review of AI and digital developments in 2024 — Approaches to digital sovereignty will vary, depending on a country’s political and legal systems. Legal approaches incl…
S50
European Tech Sovereignty: Feasibility, Challenges, and Strategic Pathways Forward — High level of consensus with constructive disagreements mainly on implementation details rather than fundamental princip…
S51
WS #257 Emerging Norms for Digital Public Infrastructure — These key comments shaped the discussion by highlighting the complex, multifaceted nature of DPI. They moved the convers…
S52
e-Accessibility Policy Handbook for Persons with Disabilities — – Evaluate: how well are needs being met? Evaluative activities provide evidence on how well the concepts …
S53
https://dig.watch/event/india-ai-impact-summit-2026/collaborative-ai-network-strengthening-skills-research-and-innovation — They want to be co -architects of the future, this fundamental shift that humanity is going through. And this is where w…
S54
Democratizing AI Building Trustworthy Systems for Everyone — The historical perspective on technology diffusion offers both hope and urgency: success requires deliberate action acro…
S55
Laying the foundations for AI governance — Artemis Seaford: So the greatest obstacle, in my opinion, to translating AI governance principles into practice may actu…
S56
Open Forum #64 Local AI Policy Pathways for Sustainable Digital Economies — Abhishek Singh: One part is that, of course, the way the technology is evolving, there is IP-driven solutions and there …
S57
WS #119 AI for Multilingual Inclusion — To achieve multilingual inclusion in AI, there is a need for innovation and local solutions. Communities should create t…
S58
Open Forum #27 Make Your AI Greener a Workshop on Sustainable AI Solutions — Mark Gachara emphasized that climate impacts are most severe in the Global South and among indigenous communities, so fu…
S59
Can we test for trust? The verification challenge in AI — ## Rapid-Fire Policy Recommendations 6. **Asymmetry** between rapid capability advancement and slower safety progress
S60
Policymaker’s Guide to International AI Safety Coordination — This comment crystallizes the fundamental tension at the heart of AI governance – the misalignment between market incent…
S61
Safe and Responsible AI at Scale Practical Pathways — Moderate disagreement level that reflects healthy debate about implementation strategies rather than fundamental opposit…
S62
AI governance struggles to match rapid adoption — Accelerating AI adoptionis exposingclear weaknesses in corporate AI governance. Research shows that while most organisat…
S63
Fireside Conversation: 01 — Gates Foundation is there. UNDP is there. The Kenyans. It’s a global coalition. Because what we learned from the agricul…
S64
Building Population-Scale Digital Public Infrastructure for AI — Launch 100 diffusion pathways by 2030 initiative with global coalition including Anthropic, Google, Gates Foundation, an…
S65
Panel Discussion AI & Cybersecurity _ India AI Impact Summit — And it requires four things, four ingredients. First of all, identity. How to remain human in a technological world. It …
S66
Panel Discussion Inclusion Innovation & the Future of AI — Thank you. So… So, you know, it’s been interesting because whenever we speak about AI at scale, when we talk about tak…
S67
AI for agriculture Scaling Intelegence for food and climate resiliance — “We will move from pilots to platforms, from fragmented data to interoperable systems, from experimentation to execution…
S68
Networking Session #37 Mapping the DPI stakeholders? — Ekanayake highlighted that DPI implementation requires government departments to work together in new ways around shared…
S69
https://dig.watch/event/india-ai-impact-summit-2026/building-population-scale-digital-public-infrastructure-for-ai — And why would we need a hub like this to do that? Well, one of the big barriers that we are currently seeing is the frag…
S70
Leading in the Digital Era: How can the Public Sector prepare for the AI age? — Modernising government processes is also on Barbados’s agenda, to align with the pace of technological development. The …
S71
AI as critical infrastructure for continuity in public services — Human adoption challenges center on fear of replacement, communication gaps, and the need for quality-focused rather tha…
S72
U.S. AI Standards Shaping the Future of Trustworthy Artificial Intelligence — The significance of MCP extends beyond technical functionality to address vendor lock-in. As Sellitto noted, “Before MCP…
S73
AI Without the Cost Rethinking Intelligence for a Constrained World — Okay. Yeah. So I came in by accident but was really interested to hear what’s being discussed, especially MSET and the p…
S74
WS #187 Bridging Internet AI Governance From Theory to Practice — Sandrine ELMI HERSI: Thank you. And let me first start to say that it’s a real pleasure to… Thank you all for joining …
S75
From principles to implementation – pathways forward — Tomas Lamanauskas:Thank you very much, Robert, and it’s great to follow the very loyal colleagues who always finish on t…
S76
(Plenary segment) Summit of the Future – General Assembly, 5th plenary meeting, 79th session — Frederick Mitchell: As we look around today, there are wars and rumors of war. Some countries have marched into other …
S77
From India to the Global South_ Advancing Social Impact with AI — 60 ,000 crores is being put in our ITIs. So our ITIs are the grassroots organizations, government ITIs, there’s maybe mo…
S78
Powering AI _ Global Leaders Session _ AI Impact Summit India Part 2 — And finally, if there is any time left, we will see if any audience member wants to ask a question. Let me start off by …
S79
Building Sovereign and Responsible AI Beyond Proof of Concepts — This comment was exceptionally thought-provoking because it introduced a completely new dimension to the AI scaling prob…
S80
WS #98 Universal Principles Local Realities Multistakeholder Pathways for DPI — Bidisha Chaudhury from the University of Amsterdam raised a crucial point about the persistence of big tech influence ev…
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
Nandan Nilekani
2 arguments · 171 words per minute · 531 words · 185 seconds
Argument 1
Rapid reduction of implementation time shows scalability
EXPLANATION
Nandan illustrates how the time required to deploy AI‑driven solutions fell dramatically across successive projects, demonstrating that once a pathway is established, later roll‑outs can be executed much faster. This showcases the potential for rapid scaling of AI applications for public good.
EVIDENCE
He described the first implementation in Maharashtra taking nine months, the subsequent rollout in Ethiopia completing in three months, and the Amul dairy-farmer system being deployed in three weeks, highlighting the acceleration from nine months to three weeks across projects [4-12].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Pathways reduced rollout time from nine months to three weeks through shared learning and institutional capability, as described in the AI diffusion discussion [S3] and reinforced in the fireside conversation with Nandan Nilekani [S19].
MAJOR DISCUSSION POINT
Implementation speed as evidence of scalable diffusion pathways
DISAGREED WITH
Trevor Mundel
Argument 2
Goal of 100 diffusion pathways by 2030 as a strategic AI agenda
EXPLANATION
Nandan announces an ambitious target of creating one hundred AI diffusion pathways worldwide by 2030, positioning it as a collective strategic goal to spread positive AI use cases across sectors and countries. The aim is to coordinate global actors to achieve this scale.
EVIDENCE
He mentions the ambition of “100 diffusion pathways by 2030,” the formation of a global coalition that includes Anthropic, Google, the Gates Foundation, and UNDP, and calls the initiative the AI equivalent of the DPI goal of 50 in five years [15-29].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The announcement of a target of 100 diffusion pathways by 2030 is recorded in the session summary on global AI partnerships [S20] and referenced in the high-level session featuring Nilekani [S14].
MAJOR DISCUSSION POINT
Setting a global target for AI diffusion
AGREED WITH
Shankar Maruwada, Trevor Mundel, Irina Ghose
Shankar Maruwada
1 argument · 133 words per minute · 1438 words · 645 seconds
Argument 1
Diffusion defined as spread of know‑how, trust and institutional capability
EXPLANATION
Shankar explains that diffusion is not merely awareness but the transfer of practical know‑how, trust, and institutional capacity that enables organizations to adopt AI safely and sustainably. This definition underpins the concept of diffusion pathways.
EVIDENCE
He states that diffusion is “the spread of know-how, trust and institutional capability that allows organizations to adopt AI safely and sustainably” and links it to the Maharashtra example as a pioneering pathway [43-46].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Diffusion is defined as the spread of know-how, trust and institutional capability in the discussion of diffusion pathways [S3] and highlighted in the moderator’s remarks on historical context [S24].
MAJOR DISCUSSION POINT
Conceptual definition of diffusion
AGREED WITH
Esther Dweck, Trevor Mundel, Nandan Nilekani
Irina Ghose
4 arguments · 163 words per minute · 1288 words · 473 seconds
Argument 1
AI must be contextual to local language, embedded in daily workflow, and iterative
EXPLANATION
Irina argues that for AI to diffuse at scale it must be delivered in the user’s native language, fit seamlessly into existing daily tasks, and be continuously refined through iteration. These conditions make AI feel intuitive rather than a specialized tool.
EVIDENCE
She lists three requirements: contextual to the local language, integrated into everyday workflow without new processes, and an iterative approach to implementation [60-62].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The need for AI to be in the user’s native language, fit existing workflows and be iteratively refined is emphasized in the agrotech lightning talk on mobile-first tools [S22], the diffusion discussion [S24], and the multilingual internet briefing [S27].
MAJOR DISCUSSION POINT
Prerequisites for population‑scale AI deployment
AGREED WITH
Shankar Maruwada, Nandan Nilekani
DISAGREED WITH
Trevor Mundel
Argument 2
“AI‑first” mindset and ecosystem enthusiasm are essential for diffusion
EXPLANATION
Irina stresses that individuals and organisations need to adopt an “AI‑first” attitude, actively champion AI within their networks, and foster an enthusiastic ecosystem to drive widespread adoption. This cultural shift is as important as the technology itself.
EVIDENCE
She says “first, I have to think that everything I do, I have to be AI first” and describes energising the Indian ecosystem and encouraging everyone to be enthusiastic about AI [72-74].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Irina’s call for an “AI-first” attitude and the importance of an enthusiastic ecosystem are quoted directly in the AI diffusion session transcript [S3] and echoed in the panel’s discussion on energising the ecosystem [S24].
MAJOR DISCUSSION POINT
Cultural and ecosystem drivers of AI diffusion
Argument 3
Failure is gradual loss of relevance; requires domain‑specific data and language support
EXPLANATION
Irina notes that AI deployments rarely fail abruptly; instead they fade as users stop finding them relevant. Maintaining relevance requires domain‑specific datasets and robust language support tailored to local contexts.
EVIDENCE
She explains that “failure never happens with a big bang, it just slowly dies because people just start reducing the level of interaction” and highlights the need for domain-specific data and language nuances, especially for Indian languages [169-170].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Gradual decay of AI relevance and the need for domain-specific datasets and language nuances are discussed in the observation about vocabulary failure [S25] and the safe-AI guidelines recommending glossaries for domain terms [S26]; multilingual challenges are also noted in [S27].
MAJOR DISCUSSION POINT
Gradual decay as a common failure mode
Argument 4
Model Context Protocol (MCP) provides a universal adapter for safe AI integration
EXPLANATION
Irina introduces the Model Context Protocol as a standard that lets developers build AI components once and reuse them across sectors, similar to how UPI standardized digital payments. MCP aims to simplify integration and improve safety by providing a common interface.
EVIDENCE
She describes MCP as “what UPI was to payments,” a universal language that makes tools and data AI-ready, allowing seamless deployment across agriculture, health, and other domains [250-254].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The Model Context Protocol is introduced as a universal adapter for AI components in the diffusion session [S3] and described as analogous to UPI in the U.S. AI standards overview [S28].
MAJOR DISCUSSION POINT
Technical standard to streamline safe AI deployment
AGREED WITH
Trevor Mundel, Esther Dweck
DISAGREED WITH
Esther Dweck
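The “universal adapter” idea behind Irina’s UPI analogy can be illustrated with a minimal sketch: heterogeneous tools register under one common calling convention, so a single AI client can discover and invoke any of them the same way. This is an illustrative toy, not the actual Model Context Protocol SDK, and every name in it (the registry class, the two sample tools) is hypothetical.

```python
# Toy sketch of the "universal adapter" pattern a protocol like MCP
# standardizes: tools from different domains share one discovery and
# invocation interface, so components are built once and reused.
from typing import Any, Callable, Dict


class ToolRegistry:
    """Holds tools from any domain behind a single call convention."""

    def __init__(self) -> None:
        self._tools: Dict[str, Callable[..., Any]] = {}

    def register(self, name: str, fn: Callable[..., Any]) -> None:
        """Expose a capability under a uniform name."""
        self._tools[name] = fn

    def list_tools(self) -> list:
        # A client can discover available capabilities without knowing
        # anything domain-specific in advance.
        return sorted(self._tools)

    def call(self, name: str, **kwargs: Any) -> Any:
        # One invocation path for every domain (agriculture, health,
        # payments, ...), mirroring "build once, deploy anywhere".
        return self._tools[name](**kwargs)


registry = ToolRegistry()
registry.register("crop_advice", lambda crop: f"advice for {crop}")
registry.register("vaccine_stock", lambda district: f"stock in {district}")

print(registry.list_tools())                       # ['crop_advice', 'vaccine_stock']
print(registry.call("crop_advice", crop="wheat"))  # advice for wheat
```

The point of the sketch is the single `call` pathway: an agent written against it never needs sector-specific integration code, which is the property the panel compares to UPI’s role in payments.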
Esther Dweck
5 arguments · 180 words per minute · 1938 words · 643 seconds
Argument 1
Procurement should be outcome‑oriented, allow controlled failure, and involve suppliers
EXPLANATION
Esther argues that public‑sector procurement must shift from a focus on lowest price and risk to an outcome‑oriented approach that tolerates controlled failure and engages directly with innovators. This change is necessary to foster AI experimentation within government.
EVIDENCE
She explains that current procurement seeks lowest price and risk, civil servants fear audits, and therefore innovation stalls; she proposes a policy-oriented, outcome-focused mindset and collaboration with suppliers to enable controlled failure [128-140].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The need to shift public-sector procurement to an outcome-oriented, failure-tolerant approach that engages innovators is outlined in the AI diffusion discussion [S3].
MAJOR DISCUSSION POINT
Reforming procurement for AI innovation
AGREED WITH
Shankar Maruwada, Trevor Mundel, Nandan Nilekani
DISAGREED WITH
Irina Ghose
Argument 2
Build digital ID and a unified service platform (gov.br) as backbone for AI services
EXPLANATION
Esther highlights Brazil’s digital ID system and the gov.br unified service platform as critical infrastructure that enables personalized, AI‑driven public services. These digital foundations support identification, data sharing, and service personalization.
EVIDENCE
She describes the digital ID and the gov.br platform as enabling personalized services, allowing the government to know citizens and tailor AI applications accordingly [147-149].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Brazil’s digital ID system and the gov.br unified service platform are presented as foundations for AI-driven public services in the Brazil digital future session [S1] and in the whole-of-government digital ID overview [S29]; DPI parallels are drawn in [S30].
MAJOR DISCUSSION POINT
Digital infrastructure as AI enabler
DISAGREED WITH
Irina Ghose
Argument 3
Establish data governance, chief data officers, and sovereign data policies
EXPLANATION
Esther outlines a plan to create a national data governance framework, appoint chief data officers in each ministry, and launch a decree on data governance to break data silos and ensure responsible, privacy‑preserving use of data for AI.
EVIDENCE
She mentions the need for a Brazilian database, the upcoming decree on data governance, and the appointment of chief data officers to oversee data use and governance [150-162].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
A national data governance framework, appointment of chief data officers and sovereign data policies are detailed in Brazil’s ministerial remarks [S1] and the data-governance workshop summary [S31]; similar recommendations appear in the AI diffusion discussion [S3].
MAJOR DISCUSSION POINT
National data governance for AI
Argument 4
Digital sovereignty requires data localization, resident clouds, and supplier negotiations
EXPLANATION
Esther stresses that Brazil must increase digital sovereignty by keeping data within national borders, developing resident cloud capabilities, and negotiating with suppliers to ensure continuity and security of services.
EVIDENCE
She discusses Brazil’s push for digital sovereignty, the creation of two state-owned companies with resident clouds, and efforts to bring data back to Brazil while negotiating with suppliers for greater control [290-304].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Brazil’s push for digital sovereignty, including resident cloud initiatives and supplier negotiations, is described in the Brazil session notes [S1] and the digital sovereignty discussion [S29].
MAJOR DISCUSSION POINT
Achieving digital sovereignty
Argument 5
Address wealth distribution and workforce impacts of AI automation
EXPLANATION
Esther points out that AI‑driven automation raises political‑economic questions about how the resulting wealth will be shared and how the workforce will be re‑skilled, emphasizing the need for policies that manage these societal impacts.
EVIDENCE
She notes concerns about a future where machines do all work, the challenge of dividing wealth generated by AI, and the broader workforce problem associated with automation [287-289].
MAJOR DISCUSSION POINT
Socio‑economic implications of AI
Trevor Mundel
5 arguments · 167 words per minute · 1117 words · 399 seconds
Argument 1
Fragmented pilots hinder scaling; scaling hubs aggregate efforts and funding
EXPLANATION
Trevor observes that the proliferation of isolated AI pilots creates fragmentation, which impedes national‑scale impact. He proposes dedicated scaling hubs that pool resources, coordinate with governments, and provide a single point of aggregation to accelerate diffusion.
EVIDENCE
He describes the existence of scaling hubs in India and Africa, their role in aggregating funding and coordinating with ministries, and how fragmentation of many small pilots is a major barrier to scaling [84-99].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The proliferation of isolated AI pilots and the role of scaling hubs in India and Africa to aggregate resources are discussed in the AI diffusion session [S3] (lines 84-99).
MAJOR DISCUSSION POINT
Need for coordinated scaling hubs
AGREED WITH
Nandan Nilekani, Shankar Maruwada, Irina Ghose
DISAGREED WITH
Irina Ghose
Argument 2
Hubs act as centers of excellence to channel diffusion and achieve rapid national scale
EXPLANATION
Trevor further explains that these scaling hubs function as centers of excellence, providing structured pathways for governments to adopt AI safely and quickly, thereby converting pilot projects into large‑scale public services.
EVIDENCE
He notes that channeling diffusion through hubs of excellence is viewed by governments as the fastest route to scale, and that this approach aligns with the DPI stack in India [96-99].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Scaling hubs are described as centers of excellence that provide structured pathways for rapid national scale in the same AI diffusion discussion [S3] (lines 96-99).
MAJOR DISCUSSION POINT
Hubs as diffusion accelerators
DISAGREED WITH
Irina Ghose
Argument 3
AI systems must be auditable and transparent, not opaque black boxes
EXPLANATION
Trevor stresses that for high‑stakes applications, AI outputs need to be auditable and explainable; stakeholders must be able to trace decisions rather than accept opaque recommendations, ensuring accountability and trust.
EVIDENCE
He cites Anthropic’s research on making model recommendations auditable and argues that clinicians need to understand why a model made a particular suggestion, emphasizing the need for transparency [274-279].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The call for auditable, explainable AI outputs, citing Anthropic’s research, is raised in the AI safety segment of the diffusion session [S3] and reinforced by U.S. AI standards on transparency [S28].
MAJOR DISCUSSION POINT
Auditability as a safety requirement
AGREED WITH
Irina Ghose, Esther Dweck
Argument 4
Urgency to save lives must be balanced with robust safety frameworks; India’s DPI serves as a testbed
EXPLANATION
Trevor highlights the tension between the urgent need for AI‑driven solutions in health and education and the necessity of strong safety safeguards. He sees India’s DPI ecosystem as an ideal environment to pilot and refine safety frameworks before broader deployment.
EVIDENCE
He mentions the pressing need for malaria vaccines and personalized education, the call for safety frameworks, and the view that India’s DPI stack can act as a safe introduction point for frugal innovation [267-273].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Balancing urgent AI-driven health solutions with safety frameworks, using India’s DPI ecosystem as a testbed, is mentioned in the diffusion discussion [S3] and echoed in the broader conversation on AI adoption vs. energy constraints [S21].
MAJOR DISCUSSION POINT
Balancing speed with safety in high‑stakes domains
DISAGREED WITH
Nandan Nilekani
Argument 5
Emphasize frugal innovation for low‑ and middle‑income countries while maintaining safety
EXPLANATION
Trevor argues that AI solutions for LMICs should be built on frugal, cost‑effective innovations that still meet rigorous safety standards, ensuring that scalability does not compromise protection of users.
EVIDENCE
He refers to “frugal innovation” that is relevant across lower-middle-income countries and stresses the need to keep safety intact while scaling such solutions [271-273].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The emphasis on frugal, cost-effective AI innovation for LMICs that retains safety standards is articulated in the AI diffusion panel [S3] (lines 271-273).
MAJOR DISCUSSION POINT
Frugal innovation as a safety‑aware scaling strategy
Speaker 1
1 argument · 74 words per minute · 83 words · 67 seconds
Argument 1
A group photograph should be taken before the panel discussion to foster unity and visibility among participants
EXPLANATION
Speaker 1 proposes that the panelists gather for a quick group photograph before starting the discussion, suggesting that this visual record and shared moment helps create a sense of cohesion and signals the collaborative nature of the event.
EVIDENCE
He thanks Nandan, states that they will start by taking a quick group photograph together and then begin the discussion, and proceeds to invite the panelists onto the stage for the photo [33-36].
MAJOR DISCUSSION POINT
Procedural step to promote cohesion and visibility
Agreements
Agreement Points
Creation of structured diffusion pathways/infrastructure to accelerate AI scaling
Speakers: Nandan Nilekani, Shankar Maruwada, Trevor Mundeli, Irina Ghose
Goal of 100 diffusion pathways by 2030 as a strategic AI agenda
Diffusion defined as spread of know‑how, trust and institutional capability
Fragmented pilots hinder scaling; scaling hubs aggregate efforts and funding
Model Context Protocol (MCP) provides a universal adapter for safe AI integration
All speakers stress the need for repeatable, shared pathways (whether described as a global target of 100 pathways, know-how/trust infrastructure, scaling hubs, or a universal technical protocol) to dramatically reduce implementation time and enable rapid, safe diffusion of AI solutions across sectors and countries [15-29][43-46][84-99][250-254].
POLICY CONTEXT (KNOWLEDGE BASE)
The need for structured diffusion pathways is highlighted in discussions on building population-scale digital public infrastructure, where scaling hubs are proposed as aggregation points to coordinate AI rollout [S39] and the concept of co-architecting 100 AI diffusion pathways emphasizes coordinated, language-aware scaling [S53]. Open-source diffusion models further support structured pathways for broader access [S56].
Institutional and governance reforms are essential for AI diffusion
Speakers: Shankar Maruwada, Esther Dweck, Trevor Mundeli, Nandan Nilekani
Diffusion defined as spread of know‑how, trust and institutional capability
Procurement should be outcome‑oriented, allow controlled failure, and involve suppliers
Scaling hubs act as centers of excellence to channel diffusion and achieve rapid national scale
Global coalition of diverse actors to work together on diffusion pathways
Speakers agree that changes in public-sector procurement, data governance, and the creation of dedicated hubs or coalitions are required to provide the institutional capacity and policy environment needed for scaling AI safely and effectively [43-46][128-140][84-99][22-27].
POLICY CONTEXT (KNOWLEDGE BASE)
Multistakeholder, bottom-up governance models are advocated to counter centralisation in internet governance and to enable diverse participation in AI diffusion [S40]; international efforts to align AI governance across the tech stack point to the necessity of institutional reforms and common standards [S45]; policy interoperability rather than uniform global governance underscores the need for adaptable institutional frameworks [S47]; and broader calls for democratising AI stress reforms in decision-making structures [S54].
AI solutions must be contextual, language‑specific and fit into existing workflows
Speakers: Irina Ghose, Shankar Maruwada, Nandan Nilekani
AI must be contextual to local language, embedded in daily workflow, and iterative
Diffusion is the spread of know‑how, trust and institutional capability that allows organizations to adopt AI safely and sustainably
Shortened implementation timelines achieved through local adaptations (Maharashtra, Ethiopia, Amul) demonstrate the importance of contextual rollout
There is consensus that AI must be delivered in users’ native languages, integrated seamlessly into daily tasks, and iteratively refined, as this contextualization underpins successful diffusion pathways and rapid rollout [60-62][43-46][4-12].
POLICY CONTEXT (KNOWLEDGE BASE)
Diffusion requires AI to be contextual and workflow-integrated rather than a pure scientific tool, as argued in the DPI building report [S39]; multilingual inclusion initiatives stress local language solutions and community-driven systems [S57]; and the co-architecting pathways discussion highlights the importance of language-specific data and voice adoption [S53].
Safety, auditability and transparency are non‑negotiable for high‑stakes AI deployments
Speakers: Trevor Mundeli, Irina Ghose, Esther Dweck
AI systems must be auditable and transparent, not opaque black boxes
Model Context Protocol (MCP) provides a universal adapter for safe AI integration
Digital sovereignty and privacy safeguards are required for trustworthy AI services
All agree that AI must be built with mechanisms for auditability, standardised interfaces, and strong data/privacy safeguards to ensure trustworthy, safe deployment, especially in health and public services [274-279][250-254][290-304].
POLICY CONTEXT (KNOWLEDGE BASE)
Policy briefs note the asymmetry between rapid AI capability growth and slower safety progress, framing safety as a non-negotiable requirement [S59]; guidance for international AI safety coordination stresses the structural mismatch that must be addressed through robust safeguards [S60]; practical pathways for safe AI at scale call for balanced innovation and safeguards [S61]; and analyses of governance gaps show that rapid roll-outs have outpaced safety mechanisms [S62].
Similar Viewpoints
Both emphasize that a supportive ecosystem—through an AI‑first cultural mindset and robust digital public service platforms—drives adoption and effective use of AI at scale [72-74][147-149].
Speakers: Irina Ghose, Esther Dweck
“AI‑first” mindset and ecosystem enthusiasm are essential for diffusion
Build digital ID and a unified service platform (gov.br) as backbone for AI services
Both view the creation of dedicated structures (hubs or pathways) that aggregate expertise and resources as critical to moving AI from pilots to institutionalised services [84-99][43-46].
Speakers: Trevor Mundeli, Shankar Maruwada
Scaling hubs act as centers of excellence to channel diffusion and achieve rapid national scale
Diffusion defined as spread of know‑how, trust and institutional capability
Unexpected Consensus
Alignment of technical standardisation with data sovereignty goals
Speakers: Irina Ghose, Esther Dweck
Model Context Protocol (MCP) provides a universal adapter for safe AI integration
Digital sovereignty requires data localisation, resident clouds and supplier negotiations
While Irina focuses on a technical universal protocol (MCP) to streamline AI integration, Esther stresses policy-driven data localisation and sovereignty; both converge on the need for standardized, controllable data interfaces that respect national control, an unexpected overlap between technical and policy domains [250-254][290-304].
POLICY CONTEXT (KNOWLEDGE BASE)
Public-private partnerships are identified as key to harmonising AI standards that support sovereign data policies [S42]; discussions on interoperable digital identity underline the need for standards that respect sovereignty while enabling cross-border interoperability [S43]; technical standards policy analyses map how organisations like ISO and ITU can bridge standards and regulation for sovereign objectives [S44]; the push for an ISO AI governance standard illustrates movement toward universal standards compatible with national sovereignty concerns [S45]; and regional debates on digital sovereignty in Brazil and Europe illustrate the tension and attempts to align standards with sovereign goals [S48][S50].
Overall Assessment

The panel shows strong convergence on four core themes: (1) establishing repeatable diffusion pathways, (2) reforming institutional and governance frameworks, (3) ensuring AI is locally contextualised and workflow‑integrated, and (4) embedding safety, auditability and privacy safeguards. These shared positions indicate a high level of consensus that coordinated technical standards, policy reforms, and ecosystem building are all required to achieve the ambitious 100‑pathway target by 2030.

High consensus across speakers, suggesting that future initiatives can build on this common ground to design integrated strategies that combine technical protocols (e.g., MCP), scaling hubs, outcome‑oriented procurement, and robust data governance, thereby increasing the likelihood of successful, safe, and inclusive AI diffusion.

Differences
Different Viewpoints
Rapid scaling versus safety safeguards
Speakers: Nandan Nilekani, Trevor Mundeli
Rapid reduction of implementation time shows scalability
Urgency to save lives must be balanced with robust safety frameworks; India’s DPI serves as a testbed
Nandan emphasizes that AI-driven solutions can be rolled out extremely fast – from nine months to three weeks – and pushes the 100 diffusion pathways target as a strategic agenda [12-13][15-16]. Trevor counters that while speed is desirable, high-stakes applications (e.g., health) require strong safety and auditability frameworks, and he sees India’s DPI stack as a cautious testbed rather than a race for speed [267-273][274-279].
POLICY CONTEXT (KNOWLEDGE BASE)
Multiple sources highlight the tension between fast AI capability development and the slower evolution of safety frameworks, describing it as a core governance challenge [S59][S60]; recommendations call for balanced pathways that integrate safety without stifling innovation [S61]; and evidence shows that corporate AI governance has struggled to keep pace with rapid adoption [S62].
Centralised scaling hubs versus bottom‑up contextual diffusion
Speakers: Trevor Mundeli, Irina Ghose
Fragmented pilots hinder scaling; scaling hubs aggregate efforts and funding
Hubs act as centers of excellence to channel diffusion and achieve rapid national scale
AI must be contextual to local language, embedded in daily workflow, and iterative
Model Context Protocol (MCP) provides a universal adapter for safe AI integration
Trevor argues that the proliferation of isolated pilots creates fragmentation and proposes dedicated scaling hubs to aggregate resources and provide a coordinated pathway to national scale [84-99]. Irina stresses that diffusion works best when AI is tailored to local languages, fits existing workflows, and is iteratively refined; she also proposes a universal Model Context Protocol to enable reuse across sectors, suggesting a more decentralized, technology-standard approach [60-62][250-254].
POLICY CONTEXT (KNOWLEDGE BASE)
The DPI report proposes scaling hubs as coordination points but also warns that over-centralisation can limit diversity, advocating bottom-up multistakeholder processes [S39][S40]; emerging norms for digital public infrastructure stress the need for governance models that balance central coordination with local contextualisation [S51].
Procurement reform versus cultural AI‑first mindset
Speakers: Esther Dweck, Irina Ghose
Procurement should be outcome‑oriented, allow controlled failure, and involve suppliers
Build digital ID and a unified service platform (gov.br) as backbone for AI services
AI‑first mindset and ecosystem enthusiasm are essential for diffusion
Esther calls for a shift in public-sector procurement from lowest-price, low-risk focus to an outcome-oriented, failure-tolerant approach that engages suppliers and reforms processes [128-140]. Irina, by contrast, highlights the need for an “AI-first” cultural attitude and ecosystem enthusiasm, focusing on user-level adoption rather than institutional procurement changes [72-74].
Digital sovereignty versus universal technical standards
Speakers: Esther Dweck, Irina Ghose
Digital sovereignty requires data localisation, resident clouds, and supplier negotiations
Model Context Protocol (MCP) provides a universal adapter for safe AI integration
Esther stresses that Brazil must increase digital sovereignty by keeping data within national borders, developing resident clouds, and negotiating with suppliers to ensure continuity and security [290-304]. Irina promotes a universal Model Context Protocol that standardises AI integration across domains, potentially reducing the need for strict localisation and favouring interoperability [250-254].
POLICY CONTEXT (KNOWLEDGE BASE)
Ongoing debates on digital sovereignty in Brazil and Europe illustrate the clash between national data control and the push for interoperable technical standards [S48][S50]; policy analyses of standards organisations highlight how standards can both support and challenge sovereign objectives [S44][S45]; and broader discussions on policy interoperability stress the need to reconcile sovereign perspectives with universal standards [S47].
Unexpected Differences
Fixed diffusion pathways versus flexible universal adapter
Speakers: Shankar Maruwada, Irina Ghose
Diffusion pathways are fixed
Model Context Protocol (MCP) provides a universal adapter for safe AI integration
Shankar describes diffusion pathways as “fixed” routes that compress learning curves and risk [104-106]. Irina’s proposal of a universal Model Context Protocol suggests a flexible, reusable technical layer that can adapt across sectors, implying that pathways need not be rigidly predefined. This conceptual clash between a fixed-route view and a modular standard was not anticipated given the overall consensus on diffusion.
Overall Assessment

The panel broadly agrees on the importance of establishing AI diffusion pathways to achieve public‑good outcomes. However, substantive disagreements emerge around the preferred mechanism: rapid, technology‑driven scaling versus institutionally coordinated hubs; speed of rollout versus rigorous safety and auditability; national procurement and sovereignty reforms versus cultural, AI‑first adoption; and whether diffusion pathways should be fixed routes or supported by flexible universal standards.

Moderate to high – while the end goal is shared, the divergent strategies indicate potential friction in policy design and implementation. These tensions could affect the pace and coherence of AI diffusion, requiring careful negotiation to align rapid deployment ambitions with safety, sovereignty, and inclusive governance.

Partial Agreements
All speakers concur that creating diffusion pathways for AI is essential for public‑good impact, but they diverge on the mechanisms: Nandan proposes a global target; Shankar defines diffusion conceptually; Irina stresses language‑level contextualisation; Trevor advocates scaling hubs to overcome fragmentation; Esther calls for procurement and institutional reforms. The shared goal is evident, yet the routes differ [15-16][43-46][60-62][84-99][128-140].
Speakers: Nandan Nilekani, Shankar Maruwada, Irina Ghose, Trevor Mundeli, Esther Dweck
Goal of 100 diffusion pathways by 2030 as a strategic AI agenda
Diffusion defined as spread of know‑how, trust and institutional capability
AI must be contextual to local language, embedded in daily workflow, and iterative
Fragmented pilots hinder scaling; scaling hubs aggregate efforts and funding
Procurement should be outcome‑oriented, allow controlled failure, and involve suppliers
All three agree that safety, accountability and good governance are prerequisites for scaling AI. Trevor focuses on auditability of model outputs; Irina highlights iterative, domain‑specific refinement; Esther points to national data‑governance structures and chief data officers. Each stresses a different layer of the safety stack but shares the overarching aim of trustworthy AI deployment [274-279][60-62][150-162].
Speakers: Trevor Mundeli, Irina Ghose, Esther Dweck
AI systems must be auditable and transparent, not opaque black boxes
AI must be contextual to local language, embedded in daily workflow, and iterative
Establish data governance, chief data officers, and sovereign data policies
Takeaways
Key takeaways
Diffusion pathways are the primary mechanism for scaling AI impact; the goal is 100 pathways by 2030.
Implementation time can be dramatically reduced through learned pathways (9 months → 3 months → 3 weeks).
Successful diffusion requires AI to be contextual (local language), embedded in daily workflows, and iteratively improved.
Procurement in the public sector must shift from lowest‑price, low‑risk focus to outcome‑oriented, risk‑tolerant, innovation‑friendly processes.
Fragmented pilots hinder scale; dedicated “scaling hubs” act as national centers of excellence to aggregate pilots, funding, and expertise.
Robust digital public infrastructure (digital ID, unified service platforms, data governance, chief data officers) is essential for AI deployment.
Safety and auditability are non‑negotiable, especially in high‑stakes domains; models need transparent, auditable interfaces (e.g., Model Context Protocol).
Balancing rapid diffusion with safety is critical; frugal, low‑cost innovation can serve LMICs while maintaining safeguards.
Digital sovereignty—control over data location and usage—is a major political/economic concern for countries like Brazil.
Resolutions and action items
Launch of a global coalition to develop 100 diffusion pathways by 2030, with partners including Anthropic, Google, Gates Foundation, UNDP, etc.
Commitment to create and fund scaling hubs in Rwanda, Nigeria, Senegal, Kenya, and additional hubs in Africa and India.
Brazil’s Ministry of Management and Innovation to roll out the INSPIRE (AI for Public Service) program, integrating government, state‑owned firms, and private sector.
Adoption of outcome‑oriented procurement reforms in Brazil (and advocated for elsewhere) to enable controlled‑failure innovation.
Development of a universal Model Context Protocol (MCP) by Anthropic to standardize AI integration across sectors.
Continued partnership between India and Brazil on digital public infrastructure (digital ID, gov.br platform) and data governance frameworks.
Unresolved issues
Specific metrics and timelines for measuring the success of each diffusion pathway remain undefined.
How to uniformly enforce auditability and transparency standards across diverse AI models and vendors.
Mechanisms for ensuring long‑term sustainability and financing of scaling hubs after initial funding.
Detailed strategies for achieving full digital sovereignty, especially data localization, without compromising interoperability.
Approaches to address workforce displacement and equitable wealth distribution resulting from AI automation.
Suggested compromises
Adopt a policy‑oriented procurement approach that tolerates limited, managed failures rather than insisting on zero‑risk contracts.
Use scaling hubs as a middle ground between completely open diffusion and isolated pilots, channeling resources while preserving innovation diversity.
Implement outcome‑based incentives for suppliers, encouraging collaboration with innovators while maintaining accountability.
Balance rapid rollout (speed) with safety frameworks by piloting in frugal‑innovation contexts before wider deployment.
Thought Provoking Comments
We call these ways of reaching the goal faster, we call them as pathways… we are now setting an ambitious goal for doing 100 diffusion pathways by 2030. A global coalition including Anthropic, Google, Gates Foundation, UNDP has been announced.
Introduces the central framing device of the discussion – ‘diffusion pathways’ – and sets a concrete, time‑bound ambition together with a multi‑stakeholder coalition, moving the conversation from anecdotal pilots to a coordinated global strategy.
Sets the agenda for the entire panel; every subsequent speaker references ‘pathways’, ‘diffusion’, and the need for scalable infrastructure. It prompts the panel to think about how to operationalise such pathways across sectors and countries.
Speaker: Nandan Nilekani
Diffusion is the spread of know‑how, trust and institutional capability that allows organisations to adopt AI safely and sustainably… like Sir Edmund Hillary climbing Everest – he creates a pathway for others; it would be stupid not to share it.
Uses a vivid historical analogy to clarify that diffusion is not just awareness but the transfer of practical capability, emphasizing the moral imperative to share knowledge.
Deepens the audience’s understanding of ‘pathways’, steering the discussion toward concrete mechanisms (shared rails, institutional capability) rather than abstract technology talk.
Speaker: Shankar Maruwada
AI deployment would seldom fail because of model complexity or performance. The only reason it fails to gain scale is perception of complexity. Three things are needed: contextual language, integration into daily workflow, and an iterative approach.
Challenges the common assumption that technical performance is the main barrier, shifting focus to usability, localisation, and continuous improvement.
Redirects the conversation toward language localisation and workflow embedding, which later leads to detailed mentions of Indic language support and the need for a universal protocol.
Speaker: Irina Ghose
We are investing in scaling hubs in India and Africa to aggregate fragmented pilots. Fragmentation across ministries and funders is a big inhibitor; hubs can channel diffusion into centres of excellence.
Identifies a systemic bottleneck (fragmentation) and proposes a concrete organisational solution (scaling hubs), moving the dialogue from isolated pilots to coordinated national‑level scaling.
Introduces the concept of ‘hubs’ that becomes a reference point for later discussion on institutional pathways and the need for coordinated governance structures.
Speaker: Trevor Mundeli
Procurement in government seeks lowest price and lowest risk, making civil servants fear mistakes. We need to shift to outcome‑oriented, policy‑oriented procurement and a culture that accepts failure as part of innovation.
Highlights a deep bureaucratic barrier—risk‑averse procurement—and offers a cultural‑change solution, linking procurement reform directly to AI scaling.
Triggers a deeper examination of internal state reforms, prompting further comments on digital ID, data governance, and the necessity of institutional change for AI diffusion.
Speaker: Esther Dweck
Failure never happens with a big bang; it slowly dies because people stop using it. Keep AI contextual to the domain, support local languages, and measure ROI by new use‑cases opened.
Provides a nuanced view of why pilots fade, emphasizing sustained relevance, localisation, and measurable impact rather than one‑off deployments.
Reinforces earlier points about language and workflow, leading to concrete examples of Anthropic’s support for ten Indian languages and the discussion of the Model Context Protocol.
Speaker: Irina Ghose
Technology has to be boring, invisible. When you stop thinking of something as technology, that’s when it has truly diffused – just like UPI is now invisible to users.
Frames diffusion as a cultural shift where technology becomes part of everyday life, not a novelty, providing a clear target for AI adoption.
Sets an aspirational benchmark that guides later suggestions (e.g., MCP, universal adapters) and underscores the need for seamless integration.
Speaker: Shankar Maruwada
OpenAgriNet shows how modular, locally adaptable infrastructure can serve smallholder farmers. We should replicate that model for personal health assistants in low‑ and middle‑income countries.
Extends the diffusion concept from agriculture to health, illustrating cross‑sector applicability and raising the stakes of scaling AI for human wellbeing.
Broadens the scope of the discussion to health, prompting safety‑focused comments and highlighting the need for auditable, trustworthy AI in high‑risk domains.
Speaker: Trevor Mundeli
We have introduced the Model Context Protocol (MCP) – think of it as what UPI was to payments. It provides a universal language for AI models to access tools and data, making integration repeatable and cheap.
Proposes a concrete technical standard that could act as the ‘shared rail’ for diffusion pathways, moving the conversation from abstract ideas to actionable infrastructure.
Creates a tangible focal point for model developers, linking back to earlier calls for universal adapters and reinforcing the ‘boring technology’ narrative.
Speaker: Irina Ghose
AI systems, especially in health, must be auditable. Black‑box recommendations are never adequate; we need transparency so clinicians can understand why a suggestion was made.
Emphasises safety and accountability, introducing the ethical dimension that balances speed of diffusion with risk management.
Shifts the tone toward caution, prompting discussion of governance, DPI stacks, and the need for robust auditing frameworks before large‑scale roll‑outs.
Speaker: Trevor Mundeli
Digital sovereignty is a major political‑economic challenge. We need resident clouds, data localisation, and mechanisms like age‑verification that protect privacy while delivering services.
Raises a macro‑level policy issue—national control over data and infrastructure—that underpins all technical diffusion efforts.
Adds a strategic layer to the conversation, linking technical pathways to sovereignty concerns and influencing the concluding remarks about future digital public intelligence.
Speaker: Esther Dweck
Overall Assessment

The discussion coalesced around the central metaphor of ‘diffusion pathways’, introduced by Nandan Nilekani and fleshed out by Shankar Maruwada. Key interventions—Irina Ghose’s focus on language, workflow and perception; Trevor Mundeli’s scaling‑hub model; and Esther Dweck’s procurement and sovereignty reforms—served as turning points that moved the dialogue from anecdotal successes to systemic challenges and solutions. These comments redirected attention toward institutional design, localisation, safety, and political economy, shaping a multi‑dimensional roadmap for scaling AI responsibly by 2030.

Follow-up Questions
How can we effectively measure the return on investment (ROI) of adding language support (e.g., Bengali, other Indian languages) to AI models to ensure it opens new use cases and benefits more people?
Irina highlighted the need to assess ROI when expanding language coverage, indicating a gap in metrics for evaluating impact of language localization.
Speaker: Irina Ghose
What is the optimal design and governance model for “scaling hubs” that aggregate fragmented AI pilots and accelerate their transition to national‑scale deployments?
Trevor described scaling hubs as a solution to fragmentation but did not detail how they should operate, suggesting further study on their structure and effectiveness.
Speaker: Trevor Mundeli
How can a universal Model Context Protocol (MCP) be standardized, adopted, and integrated across diverse sectors and countries to serve as the AI equivalent of UPI for payments?
Irina introduced MCP as a potential universal adapter for AI tools and data, but its specification, governance, and adoption pathways remain unexplored.
Speaker: Irina Ghose
What concrete policies and technical architectures are needed to enhance digital sovereignty—especially data residency, operational access, and security—for countries like Brazil?
Esther emphasized Brazil’s push for digital sovereignty, noting the need for more research on data localization, sovereign cloud strategies, and related procurement practices.
Speaker: Esther Dweck
What methods and standards can make AI health recommendation systems auditable and transparent enough for clinicians and regulators to trust them?
Trevor stressed the importance of auditability in AI for health, indicating a research gap in developing practical, verifiable frameworks for clinical AI.
Speaker: Trevor Mundeli
How should policymakers balance the urgency of rapid diffusion pathways (e.g., 100 pathways by 2030) with rigorous safety safeguards in high‑stakes domains such as health and child protection?
Trevor raised the tension between speed and safety, pointing to the need for systematic safety‑by‑design guidelines that do not impede scaling.
Speaker: Trevor Mundeli
What privacy‑preserving, user‑friendly technologies can be deployed for age verification online to protect children while respecting data protection norms?
Esther described Brazil’s new law requiring age verification and the challenge of doing so without invasive surveillance, highlighting a research area in privacy‑enhancing verification.
Speaker: Esther Dweck
What metrics and evaluation frameworks should be used to assess the success and impact of AI diffusion pathways across sectors and geographies?
Nandan announced the 100 diffusion pathways goal but did not specify how progress will be measured, indicating a need for robust evaluation criteria.
Speaker: Nandan Nilekani
What specific reforms to public procurement processes can encourage innovative AI projects while managing risk and accountability within government agencies?
Esther identified procurement as a barrier to AI adoption and described mindset shifts, but concrete procedural reforms remain an open question for further study.
Speaker: Esther Dweck

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

Building Climate-Resilient Systems with AI

Session at a glance: summary, key points, and speakers overview

Summary

The panel convened to explore how artificial intelligence can be mobilised to address the intertwined challenges of development, a sustainable planet, and climate-change mitigation and adaptation [11-16]. Organisers highlighted the Green Artificial Intelligence Learning Network (GRAIL) as a not-for-profit effort to create a collaborative ecosystem linking academia, industry, philanthropy and governments to align AI advances with climate and development goals [30-32][54-55][64-68]. They noted a historic lack of interaction between the AI research community and emissions-intensive sectors, with few exceptions, underscoring the need for joint initiatives [47-52]. Although data centres add 0.5-1.4 Gt of CO₂ annually, the potential AI-driven emissions reductions of 3.5-5.4 Gt outweigh this impact, justifying a focus on AI’s net climate benefit [60-63].


The inaugural GRAIL summit gathered 200 participants from 115 organisations, producing an online platform, government engagements, and sector-specific taxonomies to identify win-win decarbonisation opportunities [71-78]. Subsequent collaborations involve McKinsey’s cost-curve analysis, a partnership with the World Business Council for Sustainable Development covering 26 % of global GHGs, and work with UNESA to double clean-power capacity by 2030 [82-89][90-95].


David Sandalow reported that AI can contribute both incremental efficiency gains and transformational breakthroughs, while AI’s own emissions are currently less than 1 % of the global total, aligning with Grantham and IEA estimates [146-153][155-158]. He identified data scarcity, talent gaps and trust as primary barriers, urging every climate-focused organisation to dedicate AI expertise [158-164]. Sandalow illustrated AI capabilities (detecting methane from satellites, predicting weather for renewables, optimising power flows and simulating battery chemistry) as essential tools for the power sector, which accounts for 28 % of global emissions [170-188][196-199]. He also warned that real-time AI operations can introduce safety risks, requiring careful governance [205-208].


Google’s Vrushali Gaud described internal climate operations that use AI to improve data-centre efficiency, optimise water and electricity use, and open-source Earth-AI datasets such as flood-risk maps, while launching a Climate Tech Centre in India to foster actionable research across low-carbon steel, aviation fuel and green-skill development [250-267][280-291][298-305]. Spencer Low added that AI-driven satellite analysis now delineates smallholder farm boundaries and crop types, feeding into India’s Krishi DSS and enabling NGOs and startups to deliver climate-smart agronomic advice [311-329][330-338]. Nalin Agarwal and Dan Travers highlighted AI-enabled grid modernisation programmes that pair startups with utilities, pilot AI tools for solar forecasting and asset scheduling, and aim to prevent costly blackouts and emissions-intensive backup generation [364-389][392-401][418-421].


Academic partners such as UCL and the Alan Turing Institute showcased AI applications ranging from campus energy optimisation to emissions-cutting shipping and HVAC solutions, emphasising that rapid, collaborative action is essential to scale these gains [476-485][527-531]. Uday Khemka closed by reiterating that the session demonstrated numerous business opportunities and life-saving deployments, and called for “radical collaboration” to translate AI innovations into large-scale climate impact [533-540].


Keypoints


Major discussion points


Urgent call for radical, cross-sector collaboration – The opening remarks stress that the “triple challenge” of development, climate mitigation and adaptation must be tackled together and that the session is an “invitation for radical action-oriented collaboration” [11-13][18-20][28]. The GRAIL network is presented as a “collaborative network of great academic institutions, commercial institutions, AI companies, industrial companies…bringing them all together with governments” [64-66][71-78]. Throughout the panel the speakers repeatedly urge participants to join the effort and co-create solutions [73-78][84-89][533-540].


AI’s potential and concrete use-cases for climate mitigation and adaptation – David Sandalow outlines the findings of the AI-for-climate report: AI can deliver “incremental gains such as improving efficiency” and “transformational gains” in new tech and materials [146-154]. He notes that AI’s own emissions are < 1 % of total GHGs [155-158] but that the main barriers are “lack of data and a lack of trained personnel” [158-162]. He then illustrates specific capabilities – pattern detection (e.g., methane leaks), prediction (weather for solar/wind), optimization (power flows) and simulation (battery chemistry) [170-186] – and warns of emerging risks from real-time AI deployment [206-208].


Google’s operational AI initiatives and data-centric climate tools – Vrushali Gaud describes Google’s “full-stack” approach, from carbon-free data-center operations to water-leak detection and grid optimisation [250-285]. She highlights the open-source “Earth AI” data sets, the Flood Hub for flood-risk prediction, and the partnership with the Indian government to launch a Google Center of Climate Tech that will pilot low-carbon steel, sustainable aviation fuel and “green skills” programmes [288-306][304-305].


AI-driven transformation of the power grid and startup ecosystems – Nalin Agarwal and Dan Travers discuss the bottleneck of grid modernization, the need for AI to handle the new variability from distributed renewables, EVs and data-centers, and the creation of an open-innovation platform that has already generated dozens of pilots with utilities in the Global South [364-382][386-393][398-405][410-416][418-420].


Institutional pilots, research hubs and scaling frameworks – Additional speakers (UCL, Alan Turing Institute, McKinsey) showcase university-led Grand Challenges, AI-enabled building and cement optimisation, and McKinsey’s effort to quantify economic and emissions impact of AI solutions [469-485][514-529][430-466]. All stress the need to move from “knowledge on the shelf” to deployed, scalable climate actions [472-475][492-494].


Overall purpose / goal


The discussion is designed to mobilise a high-level, international coalition around the GRAIL initiative to accelerate the development, deployment and scaling of AI-driven solutions that simultaneously address climate mitigation and adaptation while supporting global development goals. Speakers present evidence of AI’s impact, showcase concrete projects, and repeatedly invite participants to join collaborative platforms, pilots and research programmes to turn ideas into rapid, large-scale climate action.


Overall tone and its evolution


Opening (Uday Khemka) – Highly enthusiastic, urgent, and motivational, emphasizing “exciting sessions,” “tremendous importance,” and a “call for radical collaboration” [1-7][11-13][28].


Middle (David Sandalow, Google, power-grid speakers) – Shifts to a more technical and evidence-based tone, presenting data, specific use-cases, and acknowledging challenges such as data gaps and safety risks [146-158][170-186][398-405]. The mood remains optimistic but grounded.


Later (institutional pilots, closing remarks) – Returns to a forward-looking, collaborative tone, highlighting successes, partnerships, and a strong call-to-action, while still acknowledging the “very little time” and the need for “radical collaboration” [430-466][533-540].


Overall, the conversation maintains a positive, high-energy atmosphere, punctuated by moments of sober realism about barriers and time pressure, but consistently steering toward collective, solution-focused momentum.


Speakers

Speakers from the provided list


Uday Khemka


– Expertise: Climate-AI collaboration, development-climate nexus, convening multi-stakeholder panels


– Role/Title: Moderator / Host, involved with the Green Artificial Intelligence Learning Network (GRAIL)


– Affiliation: GRAIL (non-profit based in London)


– Source: [S18]


David Sandalow


– Expertise: Climate policy, AI for climate mitigation & adaptation, author of AI-climate report


– Role/Title: Professor, former senior U.S. government official, AI-climate thought leader


– Affiliation: (Former) U.S. Government, author of AI-climate publication


– Source: [S3]


Spencer Low


– Expertise: AI applications in agriculture, digital public goods, satellite-imagery-based farm monitoring


– Role/Title: Google representative (AI for agriculture & food systems)


– Affiliation: Google


Vrushali Gaud


– Expertise: Corporate climate operations, decarbonization, water & circularity, data-center sustainability


– Role/Title: Global Director of Climate Operations


– Affiliation: Google


– Sources: [S7], [S8], [S9]


Adam Sobey


– Expertise: AI for sustainability, environmental forecasting, climate-focused AI research


– Role/Title: Director for Sustainability


– Affiliation: The Alan Turing Institute (UK’s National AI Institute)


– Sources: [S10], [S11]


Dan Travers


– Expertise: AI for grid management, renewable integration, open-source climate-tech solutions


– Role/Title: Founder / Representative, Open Climate Fix (non-profit AI-for-grid startup)


– Affiliation: Open Climate Fix


– Sources: [S12], [S13], [S14]


Ankur Puri


– Expertise: AI consulting, quantum analytics, sector-wide AI impact assessment (energy, built environment, materials)


– Role/Title: Partner, leads Quantum Black (McKinsey’s AI practice) in India


– Affiliation: McKinsey & Company


– Sources: [S15], [S16]


Speaker 1 (identified as Rob)


– Expertise: AI integration across university research, climate Grand Challenges, AI-enabled sustainability projects


– Role/Title: Speaker / Representative, University College London (UCL)


– Affiliation: University College London


– (Information from transcript)


Nalin Agarwal


– Expertise: Climate-tech incubation, grid modernization, AI-driven power sector innovation in the Global South


– Role/Title: Founding Partner, Climate Collective


– Affiliation: Climate Collective (partnered with UNESA)


– Sources: [S19], [S20]


Additional speakers (not in the provided list)


Sean – Briefly mentioned as the person who negotiated extra time for the panel; no speaking role or title detailed.


(No other speakers were identified beyond those listed above.)


Full session report: Comprehensive analysis and detailed insights

The session opened with Uday Khemka framing the “triple challenge” of fostering development, creating a sustainable planet, and tackling climate-change mitigation and adaptation simultaneously [11-13]. He warned that the limited time allotted for the panel was a metaphor for the shrinking window to act on climate and clarified that the format was “not a real panel, there will be no discussion, just rapid ‘boom-boom-boom’ updates and a switcheroo of speakers” [17-20][28]. Khemka positioned the meeting as an invitation for radical, action-oriented collaboration [17-20][28] and announced a partnership between GRAIL, McKinsey, and the World Business Council for Sustainable Development (WBCSD), which brings together 250 companies representing roughly 26 % of global GHG emissions and 24 % of world revenues [31-34].


GRAIL overview


Khemka described the Green Artificial Intelligence Learning Network (GRAIL) as a not-for-profit initiative that aims to align rapid AI advances with development and climate agendas by creating a collaborative network of leading academic institutions, commercial firms, AI companies, industrial players, and governments [64-66]. The first GRAIL summit in London convened 200 participants from 115 organisations, produced an online collaborative platform, engaged governments, and generated sector-specific taxonomies to identify “win-win” decarbonisation opportunities [71-78].


AI-for-climate report


David Sandalow presented the AI-for-climate report, noting that it was authored by a 25-expert team that includes Song Lee, former head of the IPCC, underscoring its credibility [150-152]. He contrasted the Grantham Institute’s estimate of 0.5-1.4 Gt CO₂e from data-centre operations with a potential 3.5-5.4 Gt CO₂e reduction enabled by AI-driven climate solutions, arguing that AI’s net benefit is substantial [60-63]. Sandalow highlighted that AI emissions are “less than 1 % of total greenhouse-gas emissions” [155-158] and outlined four high-level AI functions [170-176][184-188]:


1. Detect patterns (e.g., methane leaks from satellite data)


2. Predict outcomes (e.g., weather for solar and wind farms)


3. Optimize processes (e.g., power-flow optimisation)


4. Simulate systems (e.g., battery chemistry).
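As a toy illustration of the first capability, pattern detection, flagging outliers in a series of satellite methane retrievals can be sketched with a simple statistical threshold (the readings and threshold below are invented for illustration and are not from the report):

```python
import statistics

# Hypothetical methane retrievals in parts per billion; the spike at 2410
# stands in for a plume over a leak site.
readings_ppb = [1850, 1862, 1848, 1855, 2410, 1851, 1859]

mean = statistics.mean(readings_ppb)
stdev = statistics.stdev(readings_ppb)

# Flag readings more than two standard deviations from the mean.
anomalies = [x for x in readings_ppb if abs(x - mean) / stdev > 2]
print(anomalies)  # the 2410 ppb spike is flagged
```

Real systems apply learned spatial models to imagery rather than a z-score over a single series, but the detect-flag-investigate loop is the same idea.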


He warned that real-time AI deployment can introduce security and safety risks, especially with generative AI, and that “trust is essential” for organisations to adopt AI solutions [206-208][161-162]. The main barriers identified were data scarcity, a shortage of trained personnel, and the need for trustworthy models [158-162][164].


Google’s full-stack approach


Vrushali Gaud illustrated Google’s “full-stack” strategy, which moves beyond search to optimise data-centre energy use, secure carbon-free electricity, reduce water-tap leaks, and improve grid utilisation [250-277]. Google is open-sourcing large climate-relevant datasets, including Earth AI satellite imagery and the Flood Hub flood-risk maps that serve insurers, real-estate developers, and other stakeholders [288-295]. The company announced a Climate Tech Centre in India to incubate pilots in low-carbon steel, sustainable aviation fuel, and “green-skill” programmes for tier-two cities [303-305]; further details are available on sustainabilitygoogle.com [350-353]. Spencer Low expanded the conversation to agriculture, noting that “30 % or more of greenhouse gases are in some way related to the food system” [311-313] and describing AI-driven tools that delineate smallholder farm boundaries, classify crops from multispectral imagery, and detect agronomic events such as sowing or harvest [314-328]. This data feeds India’s Krishi DSS and state-level platforms, enabling NGOs and startups to provide climate-smart advice to farmers [330-338][332-334].


Power-grid focus


Nalin Agarwal introduced the Climate Collective’s AI-for-Power Innovation Platform, a six-year programme partnered with Graylon that pairs startups with 22 utilities across the Global South, runs a high-conversion pilot programme (≈30 % of proposals become deployments), and maintains an online solution database [365-367][386-393]. Dan Travers added that the modern grid now faces three primary sources of variability (demand, wind speed, and cloud cover), while additional loads from EVs and data centres increase the complexity of real-time balancing [398-401][402-405]. He warned that without AI-driven scheduling, dynamic line rating, and optimal power-flow analyses, costly gas-fired backup generation and blackouts could undermine public support for the green transition [406-410]. Travers also highlighted Open Climate Fix’s collaborations with Indian utilities Adani and the Rajasthan Grid Operator to open-source a solar-forecasting model that outperforms UK forecasts by 20-30 % and is being transferred to India [399-402][417-420].


McKinsey’s taxonomy


Ankur Puri presented McKinsey’s strategic taxonomy that groups AI opportunities into four challenge categories: operational improvement, strategic intelligence, transformation, and autonomous operations [444-453]. His team is quantifying both economic and emissions impact for each use case to ensure scarce resources are directed toward the highest-value interventions [464-467]. This impact-focused approach complements GRAIL’s agenda of rapidly scaling solutions while securing funding from grants, venture capital, and corporate sources [68-69].


Academic contributions


Rob [LastName] from University College London (UCL) highlighted the institution’s AI heritage: UCL holds three Nobel Prizes, is the birthplace of DeepMind, and its Grand Challenges span all 11 faculties [480-483]. The Grand Challenges embed AI across the university, delivering projects such as campus-wide energy-demand forecasting, a carbon-reduction digital twin for cement production, a partnership with PGM Real Estate for AI-enabled sustainable buildings, short-term AI interventions for aviation emissions, and long-term research on electrification and hydrogen propulsion for aviation [486-492]. An open-source sea-ice classification tool for Inuit communities was also showcased [476-485].


Adam Sobey from the Alan Turing Institute framed the Institute’s work around five “missions”: environment, sustainability, defence & security, health, and foundational research [520-523]. Concrete emissions cuts demonstrated include an 18 % reduction in shipping emissions, a 42 % cut in building HVAC emissions, and a renewable-powered underground urban farm [527-531][514-529].


Consensus and points of tension


Across the panel there was strong consensus that immediate, radical collaboration is essential, that AI’s net climate benefit far outweighs its own carbon footprint, and that multi-stakeholder platforms are the most effective way to scale solutions [11-13][28][30-32][64-66][71-78][146-154][250-285][311-329][364-393][444-453][476-485]. Speakers repeatedly invited participants to join the GRAIL online platform, contribute to open-source data assets, and engage in pilot programmes.


Moderate disagreements emerged. While Khemka urged rapid deployment to match the climate timeline [17-20], both Sandalow and Travers cautioned that real-time AI can introduce security and safety risks and that trust must be built before widespread adoption [161-162][206-208]. Data availability was another point of tension: Sandalow highlighted data scarcity as a primary barrier [158-160], whereas Gaud presented Google’s open-source initiatives as a solution, and Khemka acknowledged ongoing gaps despite the new collaborative platform [73-75][288-295][350-353]. Finally, preferred scaling models varied: Travers advocated open-source, transferable tools; Agarwal promoted a structured startup-utility pilot ecosystem; Puri called for rigorous impact quantification before scaling; and Gaud focused on internal corporate optimisation coupled with external partnerships [417-420][364-393][464-467][250-285].


Closing and next steps


Khemka concluded by reiterating that the panel had demonstrated “hundreds of examples of opportunities where businesses can save money or increase revenues while massively improving their emissions profiles” [535-536] and highlighted existing deployments that are already saving lives [537-538]. He called once more for “radical collaboration” to translate AI innovations into large-scale climate impact [539-540]. The session ended with a clear set of next steps: join the GRAIL platform, expand open-source climate data, accelerate pilot programmes in power, agriculture, and industry, and develop governance frameworks that address trust, security, and data-quality challenges while maintaining the urgency demanded by the climate emergency [354-357][422-426][533-540].


Session transcript: Complete transcript of the session
Uday Khemka

Very exciting sessions. I’ll just wait. Guys. So we are meeting for a tremendously important subject. And this has been a great summit. I know you’re all energized, inspired, excited and exhausted at the same time. And we will get a moment when this subject becomes a room of 5,000. So that’s what we’re going to work towards. But you’re here today. We’re delighted to have an absolutely tremendous panel with us today. I’m deeply honored, flown across from the U.S., from Europe, from Singapore and so forth. And we have a lot of material to cover. I should say that the triple challenge that we are dealing with in this panel is perhaps the most important challenge any of us will face in our lives.

Which is to promote development on the one side, while dealing with the creation of a sustainable planet and with climate change. You’re a self-selecting group, you’re all here with us and there’s a reason for that. You already know about climate change. You already know about AI. The thing is, as you know, we are not necessarily winning the battle on climate as yet, and so we need to deal with both mitigation and adaptation, and this panel will address both of those two things. We have very little time in the panel so I am going to speed along, but that’s a good metaphor for the very little time we have to do something about climate change.

So we’re in action mode. It’s a call for collaboration. We’re not going to be, I apologize to our speakers and panelists for a number of things. One, this is not a real panel. We’re not going to be having discussions. This is just boom, boom, boom, talking to you about what everyone’s doing. Secondly, there’s going to be a kind of switcheroo moment when some other speakers come up and some of us are replaced up here. Apologies for that. It’s just the intensity of the panel and for all those things I apologize in advance, but I don’t apologize for the incredible quality of our panelists today. These are amazing people. And I would just end by saying that this is not a normal session.

This is an invitation for radical action-oriented collaboration with all of you. On that basis, let me begin by talking a little bit about a summit we held last year in London and the background to it and an organization that some of us are very deeply involved with and almost everyone here is a friend of called the Green Artificial Intelligence Learning Network. You will immediately note it has the cutesy acronym of GRAIL, like the Holy Grail. And what we’re trying to do is really see what the synergy is between the development agenda and the climate agenda through the application of AI. I’m going to speed through this. We’re going to then move to Professor David Sandalow, who actually anchored our summit last year, which was the first major global summit on the application of AI to climate change.

and has very kindly flown in from Colombia. I’m going to ask our speakers one more favor, which is instead of my coming up and introducing all of you, if you don’t mind introducing yourselves, that will speed us along the way. So let’s go through this. So as you all know, and perhaps Professor Sandalow will talk about it, the IPCC gave us a target, 43 % decarbonization from 2019 levels to 2030 levels. We were meant to reduce GHG emissions by that amount. In 2021, some of us at COP26 in Glasgow had a meeting to look at the likelihood of that happening. And we came to the conclusion that the likelihood was very low. And therefore, traditional approaches to climate mitigation and adaptation needed to be enhanced with new solutions.

And we thought, what was J-curving as fast as climate change was J-curving? And the only thing we could think of was the application of AI, this great new suite of technologies, including, of course, quantum and all the other things that go with it. And we started to talk to people, and we talked to a whole bunch of people in the AI community, a whole bunch of people in the industrial and power, automotive, all the different sectors that produce emissions. And we said, are you talking to each other? And shockingly, people were not talking to each other.

There’s very little going on with some honorable exceptions at Google. Very few people were really in the AI community focusing on downstream issues around climate change. And similarly, the big industrial domains were not really focused on the use of AI for decarbonization and economic value creation. So with that lens, think about this session as throwing one J curve against another J curve. Can we throw the crazy increase in AI technology represented by this great summit against the world’s greatest challenge? That’s the purpose of the Grail organization, which is a not-for-profit based in London. It’s a vast terrain. We don’t have time to cover it all. It’s obviously mitigation. It’s obviously adaptation. We have to hit both.

And within that, there are endless taxonomies of all the wonderful things that AI can do. And, of course, you’ll be worried about the increased GHGs from data centers, but that’s not the primary focus of our session today. That has been quantified. The Grantham Institute did a quantification last year of 0.5 to 1.4 gigatons of extra GHGs from data centers, but that’s from every kind of utilization, against the potential benefit of 3.5 to 5.4 gigatons of GHG emissions being taken out. So there’s clearly a very strong balance towards what AI can do to help the planet in its shift towards a clean and green economy. Grail, what’s Grail? Grail is an attempt to create a collaborative network

of great academic institutions, commercial institutions, AI companies, industrial companies, philanthropic institutions, private sector sustainability networks like WBCSD, bringing them all together with governments to try and create massive collaboration. In the next slide at the bottom, you see that same group of institutions. Bottom right, the ideas and deal flow. Going back into Grail, bottom left, the fact that this becomes a collaborative community to get all these solutions scaling at speed and at the top, then getting that deal flow funded through grants, through government programs, through venture capital, corporate funds, but to move the agenda to real solutions at massive scale as quickly as we possibly can. All of this led to a summit that occurred that I mentioned earlier last year.

Sean, will you keep me real on the time? Thank you. Okay. And that led to 200 people, 115 organizations, including all the organizations represented here today, 60 speakers, and we looked at AI for power, AI for building materials, AI for everything you could think of vertically and horizontally, looking at the issues of materials innovation, looking at the issues of value chains, looking at carbon markets, and so forth. What has happened after the summit? Three things. One, we’ve created an online collaborative platform, and we invite all of you to join it, to co-create those solutions that can make a difference. Second, we’ve started to engage with governments around the world. Imagine a summit like this that was focused, yes, on development, but with a central climate focus as well.

How amazing would that be? And most importantly, we’re focusing on taxonomies that lead to massive calls for action from the innovation community. So we started to work on taxonomies for a variety of sectors, the energy sector, the built environment, materials innovation, and we worked with groups of AI experts and power experts and figured out what the big wins were, what are the big opportunities for companies to create economic value while at the same time massively decarbonizing. And this was an intellectual process, including many experts, some of whom are here today, that led to this astonishing work and identified the big win-win opportunities for economic value creation and decarbonization. On the bottom right, the teams from academia, industry, industry associations, a variety of other people and countries, eight country teams as well were involved in various ways.

So where’s this all going? Well, thank you to McKinsey for your very kind collaboration to after we had done all that work saying, hey, we want to help and kicking in and working with us to further refine those offerings and look at cost benefits and cost curves and all sorts of things. Delighted about it. And then there are, apart from working on the power sector, to look at generally what we can do, apart from working on the built environment, generally what we can do, apart from looking at materials innovation, generally what we can do to accelerate solutions for decarbonization through AI. We have two big partnerships that are emerging. One, and I want to slow down here.

Okay, 250 companies are part of the World Business Council, their network. That represents, in scope one, two, and three, 26 % of world GHGs. They’ve realized that that’s mainly in supply chains, and they are going into a partnership, as is McKinsey, as are other partners, to look at what the AI opportunities are to take startups and scale-ups into these decarbonization opportunities at massive scale with the 250 largest companies in the world, representing 24 % of world revenues and 26 % of world GHGs. Finally, working with coalitions of energy companies, and Nalin, who we’ll hear from later on and with whom we’re deeply partnered on this, along with others. How can we take this into accelerating? For example, UNESA has 71 energy companies, 750 gigawatts of clean power.

They want to go to 1,500 gigawatts of clean power by the end of the decade. How can we help them with AI? It’s a very practical lens. We invite you to join us and be part of this. On that note, I would like to briefly introduce Professor David Sandalow. David, I’m not going to go through your very distinguished background, as it would take the whole panel to do that, except to say that you have worked in every different field, most importantly in the past in very senior positions in government, but now, of course, you’ve been the luminary on AI solutions on climate. That is the worst introduction you’ve ever got. I’m sorry about that, just in the interest of time, but I’m…

Really very honored that you’ve flown all the way to come here, and I’m handing over to you.

David Sandalow

Thank you so much, Uday. Uday, thank you. Your energy, your enthusiasm, your passion, they’re all infectious. And your intellect is remarkable. What you’re driving forward in this area is world changing. You are not just an inspiration, you’re a gravitational force that is pulling people together to work on this, so thank you so much. What you did in London was remarkable. What you’re doing here is incredible and I’m looking forward to being part of what you’re doing in the future. It’s been my privilege to lead some teams that have been working on these issues over the course of the past couple of years, and I’m going to talk about one of the projects that we’ve done.

It looks like the slides are not there. There’s a certain, turning on the screen. There it goes. I will say that while we wait, I’ll say that I really like the metaphor that you had, Uday, about two hockey sticks. And this is just a remarkable convergence of two of the most important trends that are happening in human history right now. One of them is, alas, the increase in greenhouse gas emissions, which is happening at such an astonishing pace. But the second is the exponential growth of the capacities of artificial intelligence. What’s driving me is we need to find a way to make sure that that second trend, artificial intelligence, helps to solve the first problem.

And that’s the study that we did, for which we brought together a team of 25 experts. Just wonderful people. One of them was Song Lee, the last head of the Intergovernmental Panel on Climate Change, and some top experts. And the question we asked was very simple: how do you use artificial intelligence to reduce greenhouse gas emissions? There it is. It came up on the screen. Thank you so much. I appreciate it. And so it’s a very simple question. How do you use AI to reduce emissions of greenhouse gases? We came up with 17 chapters. We wanted to do more than just provide analysis. We wanted to provide actionable ideas for what to do.

So every chapter has recommendations. You can find a print version available on Amazon, and free downloads of the entire volume, including chapter-by-chapter versions, are available at these websites. I want to thank the government of Japan, including NEDO and METI, for supporting this work. They’ve been very important supporters of work on AI and clean energy more broadly. Oh, I’m going to promote my podcast later. But I have a podcast that’s talking about this topic as well. So here’s the table of contents for our work. We have introductions to both AI and climate change in this volume. One of the things we’re trying to do is target this both to experts and to people who are beginners in this topic.

And, you know, Uday talked about bringing together different communities. One of our basic conclusions was we need to bring together experts in climate change and experts in AI. And there are a lot of people who know a lot about climate change but don’t know a lot about AI. A lot of people who know a lot about AI, they don’t know about climate change. So we decided to have primers on each of those topics. And then we talk about eight different sectors and a number of cross -cutting topics. So we have five key takeaways. This was an interesting exercise with all of our authors taking 300 pages and trying to distill it into five key takeaways. But here’s what we came up with.

The first one, I mean, this is a kind of bottom line, but it's important. AI does have significant potential to contribute to reductions in greenhouse gas emissions. And we put those contributions in two categories. One is incremental gains, such as just improving efficiency at solar farms or in building energy use. The other is transformational gains, in particular new technologies, new materials and other things. We also looked at whether greenhouse gas emissions are increasing as a result of computing operations. We decided, based on the available evidence, that the best estimate is less than 1 percent, and maybe much less than 1 percent, of greenhouse gas emissions are currently coming from AI.

That tracks with the Grantham study that Uday talked about. That tracks with what the IEA has said as well. The main barriers to AI's impact in reducing greenhouse gas emissions are a lack of data and a lack of trained personnel. There are other barriers as well, but obviously you need data. A lot of places don't have the data for this purpose, and you need people. Trust is essential. People aren't going to use AI unless they trust it. And then every organization with a role in climate change mitigation should consider opportunities for AI to contribute to its work. I think as AI grows in the public consciousness at summits like this, that's becoming less and less of a radical recommendation.

But it's just so important. I think if you're working in climate mitigation, you need to have a team dedicated to AI and how AI can help. So I'll just run through some of our chapters quickly, because we only have a little bit of time. We have a chapter on the introduction to AI, and if you're a climate person who doesn't know a lot about AI, this might be helpful to you. One of the things we did is break down AI capabilities into four basic categories, at a very high level. The first thing AI can do is detect patterns. And how can that be helpful in climate change mitigation? Well, one example is detecting methane emissions from satellite data.

You know, some of you probably know this, but we know much, much more today than we did 10 years ago about methane emissions, and that has helped us dramatically to begin to reduce them. That's entirely dependent on the optical sensing process, and it's had real impact so far.

AI can also predict, such as weather patterns at solar and wind farms. It can optimize, such as power flows on transmission lines. And it can simulate, such as battery chemistry reactions. In fact, I'm teaching a course at Columbia right now where we're emphasizing this framework of detecting, predicting, optimizing, and simulating. Those are, broadly speaking, the capabilities that AI brings to the table. There's a lot to say about climate change, but just for those who aren't paying attention: atmospheric concentrations of heat-trapping gases are now higher than at any time in human history. In fact, higher than at any time in the past three million years.
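The "detect patterns" capability can be made concrete with a small sketch. The following is a toy illustration only, not the method any satellite methane program actually uses: it flags spikes in a synthetic concentration series with a rolling z-score, and the window size, threshold, and 1,900 ppb background level are all invented for the example.

```python
import random
import statistics

def detect_spikes(series, window=30, threshold=4.0):
    """Flag indices whose value deviates from the trailing-window
    mean by more than `threshold` standard deviations."""
    flagged = []
    for i in range(window, len(series)):
        past = series[i - window:i]
        mu = statistics.fmean(past)
        sigma = statistics.pstdev(past)
        if sigma > 0 and abs(series[i] - mu) > threshold * sigma:
            flagged.append(i)
    return flagged

# Synthetic methane-like series (ppb): stable background, one leak spike.
random.seed(0)
background = [random.gauss(1900, 5) for _ in range(200)]
background[150] += 100  # injected leak signature
print(detect_spikes(background))
```

Real detection pipelines work on two-dimensional spectral imagery and learned models rather than a one-dimensional threshold, but the underlying idea, separating an anomalous signal from background variability, is the same.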

And July 22nd, 2024, was the warmest day ever recorded. 2024 was the warmest year ever recorded, by far. And the warmest 11 years ever recorded were the last 11 years. So we are living in an era of climate change. We do deep dives into a number of different sectors. I'm just going to talk about a few of them. The power sector is maybe the most important, just because it's already 28% of greenhouse gas emissions, and our strategy for reaching decarbonization requires us to electrify lots of things. So we need to grow the power sector and decarbonize it at the same time. I don't think we're going to be able to do that without AI tools. AI is already helping decarbonize the power sector, optimizing the location of generation and transmission and increasing output at solar farms, but it can do much more.
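To make the optimization side concrete, here is a toy merit-order dispatch: meet demand from the cheapest available generation first. This is a drastic simplification of the optimal power flow analyses used in practice (no network constraints, no losses), and the fleet numbers below are invented for illustration.

```python
def dispatch(generators, demand_mw):
    """Greedy merit-order dispatch: fill demand from the cheapest
    units first, respecting each unit's capacity."""
    plan, remaining = {}, demand_mw
    for name, capacity_mw, cost in sorted(generators, key=lambda g: g[2]):
        take = min(capacity_mw, remaining)
        if take > 0:
            plan[name] = take
            remaining -= take
    if remaining > 0:
        raise ValueError("demand exceeds total capacity")
    return plan

# Hypothetical fleet: (name, capacity in MW, marginal cost in $/MWh).
fleet = [("solar", 40, 0.0), ("wind", 30, 0.0),
         ("hydro", 20, 10.0), ("gas", 50, 60.0)]
print(dispatch(fleet, 75))  # → {'solar': 40, 'wind': 30, 'hydro': 5}
```

Real grid operation adds transmission limits, ramp rates and forecast uncertainty, which is exactly why the report argues AI tools are needed beyond simple rules like this.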

Dynamic line rating, optimal power flow analyses. But to do this, we need standardized data. We need trained personnel. The utility business model is a challenge. So this is a really important area that requires a lot of attention and work. Oh, and a final point, the last bullet on this slide. Using AI in real-time operations can cause real security and safety risks. So we need to be very careful about generative AI in that context. Even as we look to deploy AI to help reduce greenhouse gas emissions, we need to be very attentive to those risks. I kind of find it amazing how few people pay attention sometimes to food systems and climate change: 30 percent or more of greenhouse gases are in some way related to the food system, and the food system is itself threatened by climate change.

AI can do a lot to improve both mitigation and resilience in the food system. There are a few examples here: integrating data from soil sensors to create fertilizer management plans, creating virtual farms. There are lots of things that can be done here. But coming back to this issue of lack of data, it's a huge problem, especially in the Global South. So the efforts to build up a digital public infrastructure that are happening here in India are so important in this regard. I'm going to go quickly here. We look at buildings, where there's tremendous potential. I think materials innovation is one of the most important areas. And, you know, 150 years ago, when Thomas Edison invented the modern light bulb, he literally spent a year running electricity through dozens, I think hundreds, of different filaments to figure out how much light and heat would be produced.

Today, we can simulate a million of those interactions in a second. And there are already tremendous advances in the pace of innovation in battery chemistry and some other areas using AI tools. For me, this is one of the most promising areas in terms of transformational gains in reducing greenhouse gas emissions. Extreme weather response is extremely important from a resilience standpoint, and we don't have a lot of time to get into it, but I think that AI/ML-enabled forecasting is transformational because it's so much cheaper. At roughly 1,000x lower cost, we can run AI/ML weather prediction tools and make a big difference on extreme weather response.

We have findings and recommendations throughout this report. You can see them here, again. We just did a new report in the same series on sustainable data centers. And our main message is that with the data center construction boom happening now, this is the time to be paying attention to data centers and sustainability. We are investing right now in multi-decade assets. We need to be paying attention to this. Smart siting is key. And finally, here's a plug for my podcast. It started about a year ago. I've had some great guests: Jensen Huang; Damilola Ogunbiyi, the head of Sustainable Energy for All; Jennifer Granholm, the U.S. Energy Secretary under Biden. Listen, as they say, available on all major podcast platforms.

Uday, once again, thank you.

Uday Khemka

I feel horrified we've got speakers of this caliber and so little time. So thank you so much for your leadership. May I invite both of you to speak? You can speak from here if you prefer. We've got two great leaders from Google. Obviously, you know, in the sphere of corporate AI leadership on climate, there is no one that parallels all of you, and we look forward to hearing your thoughts. Thank you.

Vrushali Gaud

Thank you very much for hosting this. I don't know if that's a privilege or pressure when you start with that sentence about the leadership position Google has. I just want a quick question. Raise of hands: how many of you have used Google today for either maps or searching something? Thank you. So you know who we are. That's my cue. I'm Vrushali Gaud. I'll introduce myself, and then, Spencer, you can go. In a nutshell, I lead Google's decarbonization, water and circularity strategy for the company. Essentially, that means I'm responsible for quite a few things that you had in your slide that we should be doing. A good way to introduce myself is also that I like getting things done, and so I feel like my inner calling around this is: we've had a lot of conversations, we've had a lot of playbooks and research, but it's almost like, how do you act on it?

How do you execute on it? How do you start delivering the outcomes that I think we all are looking for? So that's the space that I come from, and it's a privilege to be at Google, which allows us to expand that space. So, the reason I asked you all to show your hands: most of you know Google as a search or maps company, an information source. One of the other things Google is now, I think, is a full-stack company. And when I say full-stack, that means search and information are the top layer, but underneath that sits the entire physical infrastructure that drives it.

And so that's data centers. That's the way you operate them. That's the networks that feed into all of the applications. And so when we look at climate (my title actually is Global Director of Climate Operations, and I say that out of humility), we're trying to address it across our operations the best we can. Thank you. Good examples of that: I'll start with data centers, the big topic right now. How do we operationalize them? Where do we site them? The location, what impact it has on the community, what impact it has on the infrastructure there. Siting is a big part of it. Access to clean energy is something we're looking at, and we have a carbon-free energy goal.

So I think for us, if you look at climate, a big portion of climate is emissions. How do we impact emissions? It's from electricity. What do we do with electricity? Shift to clean energy, or renewables. And so that's the spectrum that we look at, and a lot of our investments are in carbon-free energy and how we think about it. And it's also not just taking from the grid or expecting the government or the infrastructure to get you there, but how do we invest and bring more clean energy to the grid? I think that's a big piece of what companies can do at the speed at which we are all moving: how do we take these bigger-picture systems problems and embrace them and solve them?

So one is generation of clean electricity, and the other is the grid, and how do you solve the grid problems? So that's the infrastructure of AI. Then there's using AI; going to some of the other things you were saying, Professor, we look at how we could use AI to drive our operations more efficiently. It's very boring stuff, not really shiny superstar things, but a lot of the impact comes from it. I look at water taps, and I remember the amount of leakage we have at water taps, the amount of electricity wires that are not connected. Just the inefficient use of resources is a big one, and how can we use AI to optimize, whether it's optimizing the use of our chips, optimizing the grid, or optimizing which applications run from where? That's a big part of our strategy.

That’s a big part of our strategy. And then the third piece is, what do you use AI, and how do you use it for climate? Now, clearly, our business is information and search, but which means we also have access to a lot of data. And so one of the ways we consider, as what you can do in AI is, how do you use these large data sets? A, find a way to open source them, encourage different use of them, but also incubate certain initiatives that can help to show the light to others. So Earth AI is a big one in which we you’ve got satellite images, you’ve got weather data, you’ve got all of these big chunks of information that we can put out there.

And then there's an application layer, which I think is of interest to you in terms of resiliency or mitigation. So one of the things you probably haven't heard of as much is Flood Hub. We put a lot of information out there on flood risks for different regions, which other companies can then use for whatever products they're launching, whether it's insurance or real estate. FireSat covers wildfire risks: how do you do prediction around them? All the utility companies, especially in California, where I'm based, are very passionate about using that for prediction. I can go on about the list of what data can be used and how it can be leveraged.

The thing I'm going to go back to, the crux of what you had in this, is that we're in the timeframe of two hockey sticks. One is the impact on emissions, and I completely appreciate that the tech companies, hyperscalers and data centers are at a scale contributing to it, which we obviously want to help mitigate or replace with clean electricity as much as we can. And the other is, how do you use the innovation curve on this? And I think we've just scratched the surface. There will, of course, be trials and errors, but the surface is: how do we democratize data, how do we encourage innovation, and how do we scale it very quickly?

Because I think those are the trifecta of how you drive this change. So, one of the ways, and I'll end with this, I'm super proud of what we've done this week, trying to bridge those two gaps: we're working with the Principal Scientific Adviser to the Government of India to launch a Google Center of Climate Tech. We call it Climate Tech because those are the two hockey sticks you're trying to get at, right? The tech scale and the climate impact. Our goal is to encourage academic research, but research that is actionable. So: five pilots, first of their kind, and how you can scale them. There's already a lot of focus on electricity, so we are trying to do the non-electricity pieces, which are around low-carbon steel, low-carbon materials and built environments, and low-carbon sustainable aviation fuel. And then the biggest one we don't talk about, which I think is a big lever across everything, is green skills. You need to embed this sort of thinking, green and climate-first, across every domain, and how can we encourage that in India, and especially in the tier-two cities? So, super excited about those two hockey sticks and how we as a company can bridge those gaps.

Spencer Low

The intensity of what is produced in this part of the world actually is really important globally. But what's really distinctive about APAC is actually the third major topic, which is livelihoods. As I mentioned, this is the part of the world which has a lot of developmental ambitions, and livelihoods are key. So my colleague Vrushali touched on Chapter 3, power systems. I would like to touch on agriculture and food systems, which is your Chapter 4. Agriculture and other land use is actually the largest employment sector in the Asia-Pacific; I believe in India it's about 46% of jobs. And for the region, it's the largest sector, about the same as the next two sectors added together, which are manufacturing and wholesale and retail trade.

Add those together, you get the same number of jobs as in agriculture. Now, over 80% of farms around the world, and especially in India and the rest of the Global South, are smallholder farms. And that creates an issue, because a lot of the technology for agriculture, satellite imagery, et cetera, is developed for large commercial farms. So this is one example I'd just like to delve into in terms of what Google is doing to contribute to the data, the digital public goods that Vrushali spoke of. If you want to use satellite imagery and actually understand agriculture so you can do things with it, you need to find the boundaries of your individual farms. That's your individual unit, and it's often less than two hectares, if not smaller.

And so you can do that with people poring over maps or satellite imagery, but that's not scalable. But this is a really interesting problem for AI. For those of you who'd like to know more about this, there's actually an exhibit at the Expo at the Google Pavilion. This is what we call agricultural landscape understanding, and agricultural monitoring and event detection. We've trained AI to actually detect the field boundaries. And you can say, well, that's interesting, because you can zoom into India and look at the Indo-Gangetic plain and see all the field boundaries. We've also trained the model to distinguish what crops are being grown through multispectral imagery.

And with that, we can detect events like tillage, sowing, harvest, et cetera. And all this data is now available. It is part of the Krishi DSS. So it's contributing to the digital public infrastructure of the Indian government, through the Ministry of Agriculture and through state governments, for example that of Telangana and the ADEX system. What this does is allow NGOs, government bodies, et cetera, to actually give advice to farmers, because you now understand what's going on on the ground. That's a critical driver for mitigation benefits, but also for adaptation in the planting and growing of crops. We're seeing that best practices for planting, and what to plant, are actually changing over time with climate change.
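As a rough sketch of the kind of per-pixel signal that multispectral crop monitoring builds on, here is the classic NDVI vegetation index. The reflectance values and the 0.3 rule of thumb are illustrative only; production systems like the one described use far richer learned models.

```python
def ndvi(nir, red):
    """Normalized Difference Vegetation Index for one pixel:
    healthy vegetation reflects strongly in near-infrared (NIR)
    and absorbs red light, pushing NDVI toward 1."""
    return (nir - red) / (nir + red) if (nir + red) else 0.0

# Hypothetical (NIR, red) reflectance pairs for three pixels.
pixels = [(0.45, 0.08), (0.30, 0.25), (0.50, 0.05)]
values = [round(ndvi(n, r), 2) for n, r in pixels]
print(values)  # → [0.7, 0.09, 0.82]

# A common rule of thumb: NDVI above roughly 0.3 suggests active crops.
vegetated = [v > 0.3 for v in values]
print(vegetated)  # → [True, False, True]
```

Tracking such an index over a season is one simple way to infer events like sowing and harvest, which is the spirit of the event detection described above.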

So do find out more at the pavilion. But one thing that I'd like to double-click on, as we say, is the innovation part of it. This digital public infrastructure is only helpful if it can really be used. And it's not just governments and NGOs; it's also startups. They're innovating and finding new ways of using this information. Companies like CarbonFarm, in France, are using this data. Varaha is a social entrepreneurship startup. And Wadhwani AI is another startup that we are supporting in terms of driving innovation in the agricultural space. So this is really all going to be accelerated through the use of AI, and we're very excited to contribute to that.

Thanks.

Uday Khemka

Wonderful. I'm going to just grab one of these. Thank you. So, hello. You can see that Google represents the convergence of the two themes we were talking about. And I think you have a wonderful website; at least, I've seen access to materials about your sustainability strategy online. So if people want to know more, I'm sure they can go there.

Vrushali Gaud

Yeah, I'll make a plug. The website is sustainability.google. It has all of our information, and the expo booth has all of our information. So thank you very, very much.

Uday Khemka

Now, we are inevitably vastly behind schedule, as we are with climate change. However, we're going to keep going with great focus, and we're going to turn to the energy and power sector. Now, this is a bit embarrassing, because we have to do a little bit of a switchover of people, and we don't have time to put up the new names here. So we're just going to announce them; listen with great attention. We have two fantastic speakers from the energy sector, which, as you know, is one of, if not the most important sector. Do you want to come closer, Nalin? We can just be together here. Obviously, the decarbonization of the energy sector is absolutely critical; without that, nothing happens. So I'm going to hand over straight away, first to you, Nalin, to set the stage a little bit on what you're trying to do with Climate Collective and UNESA, and then, Dan, more specifically on what you're up to. So over to both of you, and obviously introduce yourselves; sorry I haven't done it for you.

Nalin Agarwal

No, thank you. I understand we are short on time, so I'll keep it very brief. I'm Nalin Agarwal, one of the founding partners of the Climate Collective. I think we'll have the slides up soon. Great. So today I'm just going to talk very quickly about a program that we've been running for six years, where we partnered with Grail to really drive decarbonization and grid modernization, starting with India but across the Global South. So, if I can move on. Who's operating that? I'll do it. Okay, just quick snapshots. We are an ESO, an enterprise support organization, the largest in the Global South, with about 1,500 startups supported. Key partnerships: UNESA is a key one. I don't want to spend too much time here, but that's what we're going to spend some time on as well.

We do a lot of work in AI, in power but beyond. So next week we're doing the Delhi Climate Innovation Week. In fact, Google is a sponsor and partner there, and of course, Grail is as well. But happy to chat about this later. Here's what we're trying to do. I think what's happening is that a lot of the challenges on renewables are being solved, or will be solved. One of the increasing recognitions is that the grid is a key bottleneck now, and we need to really work on grid transformation. That includes both decarbonization and modernization. So that's what we're working towards. We work with utilities; there are about 22 that we've worked with so far.

We work on a problem-statement approach: get startups to apply, select startups, get them to create business cases and pilot plans, and eventually lead to pilots, right? So 22 utilities have participated, and that has led to about 20 pilots, a subset of which have become large deployments. It's a very unique program in the Global South; it's the only one, actually. High conversion ratio: about 30% of the pilots that have been proposed have come in. Key partners: I mean, 22 utilities and all the people that are working in power sector reform are part of this program. Again, I won't spend too much time, but a lot of this information is available online. All the startups that are vetted and ready to deploy are available for utilities to engage with.

We have a bunch of case studies also, but the key point is this: we are now developing this, along with Grail, into a global AI for power innovation platform, which has three components. There is the open innovation program, ElectronVibe, at the top. There is the knowledge hub, which is basically a peer-sharing platform where we do convenings co-located at COPs, at climate weeks, et cetera. And then there is an online solution database of pre-vetted solutions. I'll stop there and hand it over to Dan.

Dan Travers

Thanks, Nalin. I'm going to stand up too, because I like to stand up and talk. My name is Dan Travers. I'm from Open Climate Fix; we are a startup doing AI for the grid. I'm going to dive a little bit into the grid area, which has been talked about a bit. In order to get to net zero, we need to green the grid and we need to electrify everything. The grid of the past had, usually in each country, tens of generators, and the grid operator would know those people on a first-name basis and would ring them up and tell them when to turn up and down.

We've now got millions of generators, with solar panels and wind turbines everywhere. The grid of the past had variability from just demand. We've now got variability from demand and the wind speed and the clouds, right? Three sources of variability. The grid of the past had a normal demand that we understood well. We've now got data centres, we've got EVs, we've got batteries, we've got AC, so the demand is changing shape incredibly. How are you possibly going to address the balancing of this grid with a bunch of people in a room? You need AI solutions. You need a highly digital grid. You need something which can schedule and marshal all of these assets digitally, at sort of AI speed.

So that's really important. And why is it important? Because if we don't do it, we'll have blackouts, and if we don't do it, we'll have costs increasing, because the way that grid operators are currently dealing with this challenge is they're actually scheduling a lot of backup generation. It's usually gas-fired generation. It's very expensive, so bills are going up. And if you look around at what's happening now, there's a pushback against the green revolution. If we don't address these problems, we're going to have a democratic pushback, and we will have a reversal. So AI solutions can really help us in fighting the battle for hearts and minds as well as the actual physical battle.

So myself, I came from the banking tech space. Jack, my co-founder, came from Google DeepMind, whose name keeps coming up. We both saw there was a big gap between the amazing tech that was available in some of these industries and grid operators and the electricity industry, which is by nature very risk-averse; it has to worry about things failing all the time. So we saw the gap between those two, and we formed Open Climate Fix to really try and address that gap, to bridge it, to take sort of moonshot ideas and actually build a rocket ship that is going to fly to the moon, actually implement something, and give data to researchers.

The company is a non-profit and we're open source, and that's about the scaling, which I think is a key part of the title of this talk. So we've built the best solar forecast in the UK, we think, by about 20% or 30%, like quite a long way. We now want to take that, and are starting to take that, to India. We're working with Adani, and we're working with the Rajasthan grid operator. With a combination of open source plus commercial expansion, we see the AI tools as super transferable across grids. So I'm really excited that we can take tools from one grid, apply them to all the grids in the world, and use AI to solve climate change.
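For context on what "better by 20% or 30%" typically means in this field: solar forecasts are usually scored against a naive baseline such as persistence, and the improvement is reported as a skill score. The sketch below uses invented error numbers and is not Open Climate Fix's actual methodology.

```python
def persistence_forecast(history, horizon):
    """Naive baseline: assume the most recent observation persists
    for every future step."""
    return [history[-1]] * horizon

def skill_score(model_error, baseline_error):
    """Fractional error reduction versus a baseline
    (0 = no better than the baseline, 1 = perfect)."""
    return 1.0 - model_error / baseline_error

# Hypothetical mean absolute errors (MW) for a regional PV forecast.
print(persistence_forecast([120, 95, 60], horizon=4))  # → [60, 60, 60, 60]
print(round(skill_score(model_error=70.0, baseline_error=100.0), 2))  # → 0.3
```

A skill score around 0.2-0.3 against a reasonable baseline is the kind of margin the "20% or 30%" claim refers to.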

Thank you.

Uday Khemka

Thank you so much. You can imagine, if we had more time, we would have had a panel on the built environment, a panel on industrial decarbonization, a panel on transportation. We don't have the time. But thank you for that fantastic presentation. A couple of interventions now, as we turn to the last segment. We have three very distinguished institutions with us, all involved at the strategic level with Grail and with this process. And I'll start, Ankur, with you at McKinsey, who have been close partners.

Ankur Puri

Thanks a lot. Another race against time. I should say that Sean went out of the room and negotiated five minutes more for us. Okay. So, while the slides come up: firstly, it's a privilege to be here, and thank you for the opportunity for McKinsey to be part of the journey that you are leading, Uday. And thank you all for being here and shaping this in your own special way, at your own scale. I'm Ankur. I'm a partner based out of our India office. I lead QuantumBlack in India, which is our AI team. And I work across sectors, because that's really a lot of fun.

And part of my work has been in energy; part of it has been in the built environment. But I'm representing a team, which is quite global, that has had the privilege to work with the GRAIL effort. So I'd like to talk about how this little effort of ours is shaping the larger movement that GRAIL represents. Everybody's talking about the impact of AI, so I'm not going to talk more about that; the promise of AI, let's just be clear. I think the way large global efforts have found shape is to focus around a few challenges. So one of the big pieces that the GRAIL work has been about is shaping these four challenges and articulating them.

They're about operational improvement in our current way of working. Big consulting words: strategic intelligence and foresight, basically better planning, building things better. Transformation and innovation: can we do new things that don't exist, that will help the future? And the last one is autonomous operations, which is essentially doing current operations in a very different way: use drones instead of people to go see how the wiring is in a large electric plant, and create more impact. Several of the examples you heard about will fit into this, across energy, the built environment and materials, and this can keep expanding, to food systems perhaps. Now, there's a huge amount of work going on in just collecting the knowledge on each of those challenges.

Then you think about those fields of play: energy, the built environment. Within those, there are stakeholders. So for each stakeholder, what's relevant? And then for each stakeholder, let's say system operators as an example, network planning is a domain to think about. Asset management is a domain to think about. Delivery is a domain to think about. Field force execution. Think of this as bringing the language of the industry into this knowledge base, so that if someone manages a power plant, they'll ask: okay, what's my library of things I need to look at? Tomorrow, that can then connect to people who are innovating or providing these solutions.

One important gap in the middle is: okay, how valuable is each of these ideas when it comes to cost, when it comes to emissions? The work's not yet ready to be unveiled, but we are quite privileged to work with the Grail team and, of course, global experts to start to quantify, both in terms of economic impact and in terms of direct emissions impact, what each of these applications could be worth. Because then our scarce resources and limited time can be focused on the most important problems. And I think that's what's coming up ahead. I look forward to all of you pushing the boundary further, and it's a privilege to be part of this.

Thank you.

Speaker 1

Okay, I will kick off. So, as a metaphor for the climate, we're drastically running out of time, and I can see a clock ticking down in front of me. I'm Rob. I'm from University College London. 200 years ago, University College London was founded with a purpose: to drive change, to be impactful, and to create useful knowledge. That's really important for the climate, because we no longer have the ability to let knowledge sit on the shelf when it comes to climate. So in 2026 at UCL, the way that we bring our community together is through what we call the Grand Challenges. These are a self-funded, cross-university way of tackling problems that are too complex for any one discipline.

The climate crisis at UCL sits alongside challenges like mental health and well-being and data-empowered societies, and they're found in all 11 of UCL's faculties, from engineering to health and the arts and humanities. So where does AI come into this? Well, AI at UCL is not seen as a single discipline but as an enabling layer embedded across the entire institution. It builds on our heritage in AI: we've got Nobel Prizes, we're the birthplace of Google DeepMind, and we've got spin-out companies at unicorn valuations. Four quick examples from UCL at the moment. Starting at home, we use our own campus as a living lab. We've got sensor data from across our estate that forecasts energy demand and detects unusual patterns across UCL's buildings, and we turn that into insights for practical intervention.

Second example: our spin-out Carbon Re, which uses deep reinforcement learning and digital-twin optimization to cut fuel use and emissions in energy-intensive processes like cement production. Third example is a partnership: the UCL Center for Sustainability and Real Tech Innovation, created in partnership with PGM Real Estate. It links computer science to the built environment and accelerates AI-enabled sustainability in real estate, driving impact for the environment but also value for real estate investors.

And fourth, UCL Grand Challenges has supported an inclusive AI tool that transforms satellite and drone imagery into accessible, web-based sea-ice classification, which is being used to support safer travel for Inuit communities. Aviation is another frontier for us; it’s a grand challenge in its own right. There, we are looking at short-term and long-term interventions: AI is used to create short-term interventions that drive down aviation’s impact on the climate, while engineering undertakes long-term technology transformation in electrification and hydrogen propulsion. And finally, for UCL, convening really matters. In April 2025, as Uday mentioned, UCL, along with GRAIL, hosted our International Summit on AI Solutions for Climate Change, exploring sectors like energy and the built environment, and moving from discussions and pilots to deployment and impact.

I’ll finish with a quick call to action: work with us through the Grand Challenges.

Adam Sobey

Cheers, and we’re properly into Alex Ferguson overtime now. Hopefully not with the climate, though, so I’ll try and leave a little bit of time for Uday. I’m Adam Sobey, from the Alan Turing Institute, the UK’s national AI institute. We focus on five missions: environment, which is focused on environmental forecasting and climate change; sustainability; defence and security; health; and foundational research. As the Director for Sustainability, obviously I think that’s the most important mission, and that’s why I’m here. But we believe that the time for action is now. The planet is literally on fire. We saw fires in the US which have been linked heavily to climate change.

We are seeing droughts in India which are affecting food and people’s lives. We’re seeing pollution in Southeast Asia which is affecting health. We cannot wait for new fuels for the energy transition to arrive; we need to do something immediately, starting today. And we believe that AI can play that role. We know this because, as part of our institute, we have applied AI and data science to shipping and reduced emissions by 18%. We have done this in buildings, where we’ve improved HVAC optimisation to reduce emissions by 42%. And we’ve created an underground urban farm in the UK that runs entirely off renewable energy, allowing us to grow crops without emitting any CO2.

We’ve done some really impressive things for a relatively small institute, but we can’t do this alone. We realised that this is a global problem. The Sustainability Mission’s chief funder is Lloyd’s Register Foundation, a global charity heavily focused on the Global South, and so we think it’s really important that we work together, both within the UK and outside of it, to solve these problems. That’s why we’re really pleased to be part of GRAIL, to look for global solutions to global problems. So thank you very much.

Uday Khemka

It’s a tribute to all our speakers that they managed to pack extraordinary quality into this ridiculously short time frame. I’ll just end on three points. The first is that, through our work together, we have found hundreds of examples of opportunities where businesses, for example, can save money or increase revenues, improving their economic value while at the same time massively improving their emissions profiles on the mitigation side. Second, on the adaptation side, to your points, there are many examples, from Google to all of your institutions, where these technologies are already being deployed to save lives at a big scale. And the last point I’d make, apart from asking you to thank our speakers with a big round of applause, is that again and again you’ve heard one theme coming out of this group, which is radical collaboration.

Work with us to make the difference that we all believe and know can be made through the application of AI solutions to climate change. So maybe we could give our speakers a round of applause. Thank you very, very much.

Related Resources: Knowledge base sources related to the discussion topics (28)
Factual Notes: Claims verified against the Diplo knowledge base (4)
Confirmed (high)

“The session was not a real panel; there would be no discussion, just rapid “boom‑boom‑boom” updates.”

The knowledge base explicitly states the format was not a real panel and would consist of quick updates without discussion [S8].

Additional Context (medium)

“Uday Khemka framed a “triple challenge” of development, a sustainable planet, and climate‑change mitigation and adaptation.”

Multiple UN General Assembly sources note that speakers emphasized the intertwined importance of climate action, sustainable development, and addressing inequalities, providing broader context for the “triple challenge” framing [S87] and [S89].

Additional Context (medium)

“AI can detect patterns, predict outcomes, optimise processes, and simulate systems to aid climate action.”

The knowledge base highlights AI’s role in optimizing electricity supply and demand, reducing energy waste, and supporting climate-related decision-making, which aligns with the described AI functions [S24].

Additional Context (medium)

“AI emissions are less than 1 % of total greenhouse‑gas emissions and AI‑driven solutions could achieve multi‑gigaton CO₂e reductions.”

Other sources indicate AI could help mitigate 5-10 % of global GHG emissions by 2030 and note rising energy demand from AI models, adding nuance to the emission share and potential impact figures [S97] and [S98].

External Sources (98)
S1
Building Climate-Resilient Systems with AI — – David Sandalow- Spencer Low – Uday Khemka- David Sandalow- Vrushali Gaud- Spencer Low- Adam Sobey- Dan Travers
S2
The reality of science fiction: Behind the scenes of race and technology — ‘Every desire is an end and every end is a desire then the end of the world is a desire of the world what type of end do you de…
S3
Building Climate-Resilient Systems with AI — – David Sandalow- Dan Travers- Nalin Agarwal – David Sandalow- Spencer Low – Uday Khemka- David Sandalow- Adam Sobey …
S4
Keynote-Martin Schroeter — -Speaker 1: Role/Title: Not specified, Area of expertise: Not specified (appears to be an event moderator or host introd…
S5
Responsible AI for Children Safe Playful and Empowering Learning — -Speaker 1: Role/title not specified – appears to be a student or child participant in educational videos/demonstrations…
S6
Building Trusted AI at Scale Cities Startups &amp; Digital Sovereignty – Keynote Vijay Shekar Sharma Paytm — -Speaker 1: Role/Title: Not mentioned, Area of expertise: Not mentioned (appears to be an event host or moderator introd…
S7
Building Climate-Resilient Systems with AI — -Vrushali Gaud- Global Director of Climate Operations at Google, leads Google’s decarbonization, water and circularity s…
S8
https://dig.watch/event/india-ai-impact-summit-2026/building-climate-resilient-systems-with-ai — And so that’s data centers. That’s the way you operate that. That’s the networks that feed into all of the applications….
S9
The Innovation Beneath AI: The US-India Partnership powering the AI Era — -Vrushali Gaud- Global Director of Climate Operations at Google
S10
https://dig.watch/event/india-ai-impact-summit-2026/building-climate-resilient-systems-with-ai — Cheers, and we’re properly into Alex Ferguson overtime now. So hopefully not with the climate change, so I’ll try and le…
S11
Building Climate-Resilient Systems with AI — Cheers, and we’re properly into Alex Ferguson overtime now. So hopefully not with the climate change, so I’ll try and le…
S12
AI for Good Impact Awards — – **Dan Travers** – Representative from Open Climate Fix
S13
https://dig.watch/event/india-ai-impact-summit-2026/building-climate-resilient-systems-with-ai — So myself, I came from sort of banking tech space. Jack, my co -founder, came from Google DeepMind, who the name keeps c…
S14
Building Climate-Resilient Systems with AI — Power and Energy Systems: Dan Travers provided compelling insights into grid transformation challenges, explaining how t…
S15
Building Climate-Resilient Systems with AI — – Nalin Agarwal- Ankur Puri
S16
https://dig.watch/event/india-ai-impact-summit-2026/building-climate-resilient-systems-with-ai — And part of my work has been in energy. Part of my work has been in the built environment. Thank you. but I’m representi…
S17
Responsible AI for Shared Prosperity — -Ankur Vora- Chief Strategy Officer and President of the Africa and India Office at the Gates Foundation -Co-Moderator-…
S18
Building Climate-Resilient Systems with AI — -Uday Khemka- Moderator/Host, involved with the Green Artificial Intelligence Learning Network (GRAIL) organization
S19
Building Climate-Resilient Systems with AI — -Nalin Agarwal- Founding partner of Climate Collective, works with UNESA (utilities association), focuses on enterprise …
S20
https://dig.watch/event/india-ai-impact-summit-2026/building-climate-resilient-systems-with-ai — I don’t want to spend too much time here, but that’s what we’re going to spend some time on as well. We do a lot of work…
S21
Open Forum #27 Make Your AI Greener a Workshop on Sustainable AI Solutions — Legal and regulatory | Sustainable development | Development Reports consistently identify governance of artificial int…
S22
Survival Tech Harnessing AI to Manage Global Climate Extremes — Low to moderate disagreement level with high convergence on goals but some divergence on methods. The implications are p…
S23
AI and Data Driving India’s Energy Transformation for Climate Solutions — Coming to the electric vehicles also, in mobility transition, the similar challenges are aired. In ISD, we have a separa…
S24
Climate change and Technology implementation | IGF 2023 WS #570 — One argument suggests that the internet and technology can enable innovative solutions by using artificial intelligence …
S25
Green AI and the battle between progress and sustainability — AI is increasingly recognised for its transformative potential and growing environmental footprint across industries. Th…
S26
Workshop 6: Perception of AI Tools in Business Operations: Building Trustworthy and Rights-Respecting Technologies — The discussion revealed a striking acceleration in AI adoption across business sectors, with usage rates increasing from…
S27
AI as critical infrastructure for continuity in public services — Human adoption challenges center on fear of replacement, communication gaps, and the need for quality-focused rather tha…
S28
Public-Private Partnerships in Online Content Moderation | IGF 2023 Open Forum #95 — In addition to public-private partnerships, the analysis emphasizes the need for collaboration among the data, tech, and…
S29
Navigating the Double-Edged Sword: ICT’s and AI’s Impact on Energy Consumption, GHG Emissions, and Environmental Sustainability — Antonia Gawel:I mean, I think very much a focus on decarbonization of the power sector is a critical input and a signifi…
S30
ICT and green skills crucial for EU’s climate targets and green economy transition, stakeholders say — Industry representatives and policymakersemphasised that the ICT sector and green digital skills are crucial for the Eur…
S31
Parallel Session A9: Climate Change Adaptation, Resilience-Building and DRR for Ports (continued) — In summary, the refined overview elevates the dialogue surrounding climate and disaster resilience, portraying an insigh…
S32
Green and digital transitions: towards a sustainable future | IGF 2023 WS #147 — Professor Liu shared insights on the key challenges faced by governments in driving sustainable development, emphasising…
S33
GC3B: Mainstreaming cyber resilience and development agenda | IGF 2023 Open Forum #72 — Another noteworthy observation was the importance of collaboration among countries. By working together and sharing thei…
S34
WS #466 AI at a Crossroads Between Sovereignty and Sustainability — Pedro Ivo Ferraz da Silva: Yeah, thank you very much, José Renato, Alexandra, and also other colleagues in the panel. It…
S35
Google’s AI data centre in Saudi Arabia raises climate concerns — Google has announced plans to open a new AI-focused data centre in Saudi Arabia, aligning with Saudi Arabia’s Public Inv…
S36
AI energy demand accelerates while clean power lags — Data centres are driving asharp rise in electricity consumption, putting mounting pressure on power infrastructure that …
S37
Powering AI _ Global Leaders Session _ AI Impact Summit India Part 2 — The discussion acknowledged environmental and social challenges, including impacts from increased electricity generation…
S38
Open Forum #53 AI for Sustainable Development Country Insights and Strategies — Aubra Anthony: Yeah, thanks, Yuping. And, yeah, a very auspicious time, really. I mentioned earlier some of the issues t…
S39
Empowering the Ethical Supply Chain: steps to responsible sourcing and circular economy (Lenovo) — In conclusion, the analysis of the speakers’ perspectives on sustainability and responsible consumption reveals importan…
S40
Making Climate Tech Count — The discussion underscored the urgency of climate action while acknowledging the complexities of transforming global ene…
S41
Building Climate-Resilient Systems with AI — “This is an invitation for radical action-oriented collaboration with all of you.”[1]. “It’s a call for collaboration.”…
S42
78th Session of the UN General Assembly (UNGA 78) — In hisvision statement, Francis affirms that multilateralism offers better chances of finding global consensus to tackle…
S43
Networking Session #50 AI and Environment: Sustainable Development | IGF 2023 — Artificial Intelligence (AI) technologies have the potential to significantly contribute to creating greener cities and …
S44
Navigating the Double-Edged Sword: ICT’s and AI’s Impact on Energy Consumption, GHG Emissions, and Environmental Sustainability — Antonia Gawel:I mean, I think very much a focus on decarbonization of the power sector is a critical input and a signifi…
S45
MIT explores AI solutions to reduce emissions — Rapid growth in AI data centres israising global energy use and emissions, prompting MIT scientists to cut the carbon fo…
S46
AI Meets Cybersecurity Trust Governance &amp; Global Security — The main disagreements center on the role of regulation versus industry pressure, the urgency of action versus deliberat…
S47
How to make AI governance fit for purpose? — Legal and regulatory | Development The speed of AI development creates uncertainty and challenges that exceed current c…
S48
AI Meets Agriculture Building Food Security and Climate Resilien — The collaborative approach involving multiple stakeholders allows solutions to be deployed with confidence across differ…
S49
AI and Data Driving India’s Energy Transformation for Climate Solutions — The expert panel discussion emphasized critical enabling conditions for scaling these solutions beyond pilot projects. K…
S50
AI for Safer Workplaces &amp; Smarter Industries Transforming Risk into Real-Time Intelligence — There was unexpected consensus that fear about AI is widespread across different age groups and demographics, but this f…
S51
UN warns AI poses risks without proper climate oversight — AI can help tackle the climate crisis, butgovernments must regulate itto ensure positive outcomes, says UN climate chief…
S52
AI climate benefits overstated says new civil society report — Environmental groups, including Beyond Fossil Fuels and Stand.earth,have publisheda report challenging claims that AI wi…
S53
Challenging the status quo of AI security — AI technology has two sides: it can enhance security measures and help improve existing security systems, but it also in…
S54
The Innovation Beneath AI: The US-India Partnership powering the AI Era — Thank you. Thank you. Thank you. this infrastructure right now and closing the gap between commitments and capacity. Thi…
S55
Google expands Earth AI for disaster response and environmental monitoring — The US tech giant, Google,has expandedaccess to Earth AI, a platform built on decades of geospatial modelling combined w…
S56
Climate change and Technology implementation | IGF 2023 WS #570 — One argument suggests that the internet and technology can enable innovative solutions by using artificial intelligence …
S57
Survival Tech Harnessing AI to Manage Global Climate Extremes — So if we define why we are creating models, what decision we are going to guide based on that data -to -decision framewo…
S58
AI for agriculture Scaling Intelegence for food and climate resiliance — The policy adopts a government‑led, ecosystem‑driven approach to foster AI solutions for agriculture across Maharashtra….
S59
All hands on deck to connect the next billions | IGF 2023 WS #198 — Additionally, Joe Welch affirms the value of a multilateral, multistakeholder approach. He emphasizes the need for colla…
S60
Building Climate-Resilient Systems with AI — “Grail is an attempt to create… …to create a collaborative network.”[16]. “Going back into Grail, bottom left, the f…
S61
The Purpose of Science / DAVOS 2025 — Collaboration between academic institutions and industry can lead to innovative solutions
S62
Next-Gen Industrial Infrastructure / Davos 2025 — The discussion also touched on the challenges of sustainability, with emphasis on the need for green energy infrastructu…
S63
Parallel Session A9: Climate Change Adaptation, Resilience-Building and DRR for Ports (continued) — It suggests that resilience can only be achieved through collaborative efforts, an imperative for ensuring the sturdines…
S64
Open Forum #27 Make Your AI Greener a Workshop on Sustainable AI Solutions — Sustainable development | Infrastructure | Development The moderator emphasized the paradoxical nature of AI technology…
S65
Networking Session #50 AI and Environment: Sustainable Development | IGF 2023 — Artificial intelligence (AI) is improving the ways we live, work and solve problems. It can also help us fight climate c…
S66
(Interactive Dialogue 3) Summit of the Future – General Assembly, 79th session — Jonas Gahr Støre emphasizes the potential of AI and digital tools in addressing climate change. He argues that these tec…
S67
The Innovation Beneath AI: The US-India Partnership powering the AI Era — India Energy Stack implementation enabling peer-to-peer energy trading for data centers to source power from distributed…
S68
Google’s AI data centre in Saudi Arabia raises climate concerns — Google has announced plans to open a new AI-focused data centre in Saudi Arabia, aligning with Saudi Arabia’s Public Inv…
S69
AI and Data Driving India’s Energy Transformation for Climate Solutions — Two detailed case studies demonstrated practical applications of this approach. Arthur Global presented research on heat…
S70
Powering AI _ Global Leaders Session _ AI Impact Summit India Part 2 — . in five years in certain areas, and the households are feeling that pinch. There is an issue of reliability. Grids wer…
S71
Networking Session #60 Risk &amp; impact assessment of AI on human rights &amp; democracy — – David Leslie: Director of Ethics and Responsible Research Innovation at the Alan Turing Institute, Professor of Ethics…
S72
Upskilling for the AI era: Education’s next revolution — The tone is consistently optimistic, motivational, and action-oriented throughout. The speaker maintains an enthusiastic…
S73
Opening of the session — The tone was generally constructive and collaborative, with delegates emphasizing the need for cooperation and shared co…
S74
Friday Opening Ceremony: Summit of the Future Action Days — The overall tone was inspirational, hopeful and energetic. Speakers aimed to motivate and empower youth attendees while …
S75
Opening of the session — This comment provided crucial leadership by acknowledging the difficulty of the remaining negotiations while maintaining…
S76
Building Trusted AI at Scale Cities Startups &amp; Digital Sovereignty – Keynote Hemant Taneja General Catalyst — The tone is consistently optimistic, inspirational, and forward-looking throughout the speech. The speaker maintains an …
S77
How AI Drives Innovation and Economic Growth — The tone was notably optimistic yet pragmatic, described as representing “hope” rather than the “fear” that characterize…
S78
Session — The tone was primarily analytical and forward-looking, with the speaker presenting evidence-based predictions while ackn…
S79
Regional Leaders Discuss AI-Ready Digital Infrastructure — The discussion maintained a consistently optimistic yet pragmatic tone throughout. Panelists were enthusiastic about AI’…
S80
Safe and Responsible AI at Scale Practical Pathways — The tone was collaborative and solution-oriented, with industry experts and government representatives working together …
S81
AI: Lifting All Boats / DAVOS 2025 — The tone was largely optimistic and solution-oriented, with speakers acknowledging challenges but focusing on opportunit…
S82
Parliamentary Closing Closing Remarks and Key Messages From the Parliamentary Track — The discussion maintained a collaborative and constructive tone throughout, characterized by diplomatic language and mut…
S83
Closing remarks – Charting the path forward — The tone throughout was consistently formal, diplomatic, and optimistic. It maintained a collaborative and forward-looki…
S84
Leaders TalkX: Partnership pivot: rethinking cooperation in the digital era — The discussion maintained a professional, collaborative, and forward-looking tone throughout. Despite the moderator’s ac…
S85
Building the Future STPI Global Partnerships &amp; Startup Felicitation 2026 — The tone was consistently optimistic, collaborative, and forward-looking throughout the session. It maintained a formal …
S86
Closing Ceremony — The discussion maintains a consistently positive and collaborative tone throughout, characterized by gratitude, celebrat…
S87
(Day 5) General Debate – General Assembly, 79th session: afternoon session — Several speakers stressed the importance of addressing climate change, achieving sustainable development goals, and prov…
S88
(Day 3) General Debate – General Assembly, 79th session: afternoon session — Several speakers emphasized the importance of addressing climate change, particularly through financial support for deve…
S89
Opening &amp; Plenary segment: Summit of the Future – General Assembly, 3rd plenary meeting, 79th session — – Importance of addressing climate change, sustainable development, and reducing inequalities
S90
(Day 1) General Debate – General Assembly, 79th session: afternoon session — Sadyr Zhaparov – Kyrgyzstan: Mr. Secretary General, Mr. President, distinguished heads of delegations, ladies and gentl…
S91
Network Evolution: Challenges and Solutions — Audience: Not really a question, but just a comment that this workshop is not well designed in terms of timing because th…
S92
(Plenary segment) Summit of the Future – General Assembly, 5th plenary meeting, 79th session — Emomali Rahmon: Excellency Chairperson, Excellency Secretary-General, ladies and gentlemen, I’d like to first of all e…
S93
How to make digital transformation inclusive, responsible and sustainable (United Kingdom) — It aims to align with the wider global agenda, particularly the SDGs. It aims to align with the wider global agenda, pa…
S94
Artificial intelligence (AI) – UN Security Council — During the9821st meetingof the Artificial Intelligence Security Council, a key discussion centered around whether existi…
S95
Will science diplomacy survive? — Science in diplomacyis about using scientific evidence and advice for foreign policy decision-making. In these cases, so…
S96
Embracing AI for Good: Insights and practices — Development | Infrastructure First zero-carbon big data center in Qinghai Province powered by 100% clean energy, base s…
S97
Taking the pulse of the planet — Despite this, AI has the potential to be a formidable force in battling climate change. It can aid in mitigating between…
S98
Is AI the key to nuclear renaissance? — There is a direct correlation between the exponential increase in model parameters and the increase in the computational…
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
U
Uday Khemka
3 arguments · 167 words per minute · 2437 words · 871 seconds
Argument 1
Triple‑challenge framing: development, climate, and AI must be tackled together (Uday Khemka)
EXPLANATION
Uday frames the discussion as a triple challenge that links development goals, climate mitigation and adaptation, and the rapid advancement of artificial intelligence. He argues that addressing these three pillars simultaneously is essential for meaningful progress.
EVIDENCE
Uday states that the panel’s “triple challenge … is perhaps the most important challenge any of us will face” and explains it involves “promote development on the one side while dealing with the creation of a sustainable planet and climate change” [11-14]. He also notes that the panel will address both mitigation and adaptation, underscoring the need for integrated action [16].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The triple-challenge framing aligns with reports that place AI governance, climate change and resource management (energy, water) together as top global challenges [S21].
MAJOR DISCUSSION POINT
Triple‑challenge framing
Argument 2
GRAIL network unites academia, industry, NGOs, governments and investors to accelerate AI‑driven climate projects (Uday Khemka)
EXPLANATION
Uday describes the GRAIL (Green Artificial Intelligence Learning Network) as a collaborative platform that brings together diverse stakeholders to scale AI‑based climate solutions. The network aims to connect research, industry, philanthropy and policy to move ideas into implementation quickly.
EVIDENCE
He outlines GRAIL as “a collaborative network of great academic institutions, commercial institutions, AI companies, industrial companies, philanthropic institutions, private sector sustainability networks … bringing them all together with governments” and explains its structure of deal flow, funding and scaling on slides [64-68]. He also mentions the summit that gathered 200 people from 115 organizations, illustrating the network’s breadth [71-73].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Uday’s description of GRAIL aligns with the panel’s mention of collaborating with the GRAIL effort in Building Climate-Resilient Systems with AI [S1].
MAJOR DISCUSSION POINT
GRAIL collaborative network
AGREED WITH
Vrushali Gaud, Nalin Agarwal, Ankur Puri, Spencer Low
Argument 3
Time pressure on climate action demands rapid, radical collaboration and scaling of AI initiatives (Uday Khemka)
EXPLANATION
Uday repeatedly emphasizes the urgency of climate action, likening the limited time for the panel to the limited time we have to address climate change. He calls for radical, action‑oriented collaboration among all participants.
EVIDENCE
He notes “we have very little time in the panel … that’s a good metaphor for the very little time we have to do something about climate change” and later says “we are short on time … this is an invitation for radical action-oriented collaboration” [17-20][28-29].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Uday’s urgency mirrors the record-breaking heat observations and the pressing need for climate action highlighted in the Building Climate-Resilient Systems with AI discussion [S1].
MAJOR DISCUSSION POINT
Urgency and radical collaboration
AGREED WITH
David Sandalow, Adam Sobey
D
David Sandalow
3 arguments · 189 words per minute · 2021 words · 639 seconds
Argument 1
AI can deliver both incremental efficiency gains and transformational breakthroughs, with net climate benefit far outweighing its own emissions (< 1 %) (David Sandalow)
EXPLANATION
David argues that AI offers both small, incremental improvements and large, transformational advances that together provide a net positive climate impact. He stresses that the emissions from AI itself are negligible compared with the potential reductions it can enable.
EVIDENCE
He categorises AI impacts as “incremental gains such as improving efficiency” and “transformational gains … new tech, new materials” and cites that “less than 1 % of greenhouse gas emissions are currently coming from AI” which aligns with the Grantham study and IEA data [147-156].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
While David argues that AI’s net climate benefit exceeds its emissions, the Green AI report points out the substantial energy and carbon costs of large AI models, offering a contrasting perspective on AI’s environmental footprint [S25].
MAJOR DISCUSSION POINT
AI’s net climate benefit
Argument 2
The AI‑Climate report provides a taxonomy and actionable recommendations, urging every climate organisation to create dedicated AI teams (David Sandalow)
EXPLANATION
David presents the AI‑Climate report as a practical guide that organises AI use‑cases into a taxonomy and supplies concrete recommendations. He calls on climate organisations to establish dedicated AI teams to implement these ideas.
EVIDENCE
He mentions that the report contains 17 chapters, each with recommendations, and that it includes primers on AI and climate to help both experts and beginners, urging organisations to “consider opportunities for AI” and to have a dedicated AI team [124-138].
MAJOR DISCUSSION POINT
AI‑Climate report and AI teams
Argument 3
Critical gaps: lack of high‑quality data, skilled personnel and trust hinder AI adoption in climate work (David Sandalow)
EXPLANATION
David identifies three main barriers to effective AI deployment for climate: insufficient data, a shortage of trained experts, and a lack of trust in AI outputs. Overcoming these gaps is essential for scaling AI‑driven climate solutions.
EVIDENCE
He states that “the main barriers to AI’s impact … are a lack of data and a lack of trained personnel” and adds that “trust is essential. People aren’t going to use AI unless they trust it” [158-162].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The identified barriers mirror findings on skill shortages, governance gaps and data silos in AI adoption reported in workshops on AI tools [S26], organizational barriers and data silos [S27], and data gaps highlighted in public-private partnership discussions [S28].
MAJOR DISCUSSION POINT
Barriers to AI adoption
AGREED WITH
Vrushali Gaud, Spencer Low, Nalin Agarwal
Vrushali Gaud
4 arguments · 203 words per minute · 1354 words · 398 seconds
Argument 1
Deploying AI to optimise internal operations—energy use, water, grid routing—creates measurable emissions reductions (Vrushali Gaud)
EXPLANATION
Vrushali explains that Google applies AI to improve efficiency across its own operations, such as reducing water leaks, optimizing electricity use, and managing grid routing. These internal optimisations translate into tangible emissions cuts.
EVIDENCE
She describes using AI to “optimize … water taps, electricity wires that are not connected, optimizing the grid, optimizing which applications run from where” and notes that these efficiency gains are a major part of Google’s climate strategy [284-286].
MAJOR DISCUSSION POINT
AI‑driven internal optimisation
AGREED WITH
David Sandalow, Dan Travers, Ankur Puri
Argument 2
Google’s Climate Tech Center in India and open‑source data assets (Earth AI, Flood Hub) foster public‑good AI research and deployment (Vrushali Gaud)
EXPLANATION
Vrushali highlights Google’s initiative to launch a Climate Tech Center in India, which will support academic research and open‑source data platforms like Earth AI and Flood Hub. These resources aim to democratise climate data and spur innovation.
EVIDENCE
She mentions “we’re working with the Principal Scientific Advisory of the Government of India to launch a Google Center of Climate Tech” and describes Earth AI (satellite images, weather data) and Flood Hub (flood risk information) as public data assets [303-304][290-295].
MAJOR DISCUSSION POINT
Climate Tech Center and open data
Argument 3
Data‑center decarbonisation, clean‑energy procurement and AI‑optimised resource use across Google’s operations (Vrushali Gaud)
EXPLANATION
Vrushali outlines Google’s strategy to decarbonise its data centres, secure carbon‑free electricity, and use AI to optimise resource consumption throughout the company’s infrastructure. This holistic approach targets emissions from both the physical and digital layers of Google’s business.
EVIDENCE
She discusses data-center emissions, the goal of “carbon-free energy”, and the need to “operationalise” data-center location, community impact and clean-energy procurement, noting that AI helps optimise chips, grid routing and resource use [260-277][284-286].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Google’s focus on data-center decarbonisation and AI-optimised resource use is echoed in discussions of data-center climate operations [S8] and the broader need for AI-driven power-sector decarbonisation [S29].
MAJOR DISCUSSION POINT
Google’s data‑center and operations strategy
Argument 4
Need for green skills development and inclusive training to embed climate‑first thinking across all domains (Vrushali Gaud)
EXPLANATION
Vrushali stresses the importance of building “green skills” so that climate considerations become integral to every sector, especially in emerging economies and tier‑two cities. Training and inclusive education are presented as essential levers for systemic change.
EVIDENCE
She says the goal is “to embed this sort of a thinking which is green climate first across every domain and how can we encourage that in India and especially the tier two cities” [303-305].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The emphasis on green skills matches calls for ICT-related green digital skills in the EU Green Deal [S30] and the identification of green skills as a key lever in Building Climate-Resilient Systems with AI [S1].
MAJOR DISCUSSION POINT
Green skills and training
Spencer Low
3 arguments · 163 words per minute · 638 words · 234 seconds
Argument 1
AI‑driven detection, prediction, optimisation and simulation are essential for agriculture, food security and climate resilience (Spencer Low)
EXPLANATION
Spencer argues that AI’s core capabilities—detecting patterns, predicting outcomes, optimizing processes, and simulating scenarios—are crucial for improving agricultural productivity, ensuring food security, and enhancing climate resilience in the Asia‑Pacific region.
EVIDENCE
He explains that AI is used for “agricultural landscape understanding”, detecting field boundaries, classifying crops, and identifying events such as tillage or harvest, which feeds into digital public services for farmers and supports both mitigation and adaptation [322-328][329-332].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Spencer’s claim that AI detection and prediction are vital for agriculture is supported by examples of AI-driven environmental sensing for climate mitigation presented in the IGF workshop on technology implementation [S24].
MAJOR DISCUSSION POINT
AI capabilities for agriculture
AGREED WITH
David Sandalow, Adam Sobey
Argument 2
Building a digital public infrastructure for agriculture enables governments, NGOs and startups to deliver climate‑smart services (Spencer Low)
EXPLANATION
Spencer describes a digital public good that aggregates satellite and sensor data, making it accessible to governments, NGOs, and private innovators. This infrastructure underpins climate‑smart advisory services for smallholder farmers.
EVIDENCE
He notes that the data is part of “Krishi DSS”, shared with Indian ministries and state governments, allowing NGOs and startups to give advice to farmers and improve planting practices in response to climate change [329-332][333-338].
MAJOR DISCUSSION POINT
Digital public infrastructure for agriculture
AGREED WITH
Uday Khemka, Vrushali Gaud, Nalin Agarwal, Ankur Puri
Argument 3
AI for precise farm‑boundary mapping, crop classification and event detection to support smallholder farmers in Asia‑Pacific (Spencer Low)
EXPLANATION
Spencer details how AI models can automatically delineate individual farm plots, identify the crops grown, and detect key agricultural events, thereby scaling services that were previously manual and unscalable.
EVIDENCE
He describes training AI to “digitally enhance the environment … field boundary” and to distinguish crops via multispectral imagery, detecting tillage, sowing and harvest, with the outputs made available through public platforms [322-328].
MAJOR DISCUSSION POINT
AI‑enabled farm mapping
Dan Travers
4 arguments · 192 words per minute · 614 words · 191 seconds
Argument 1
AI‑enabled grid forecasting and real‑time dispatch are vital to balance renewable generation and avoid costly backup generation (Dan Travers)
EXPLANATION
Dan emphasizes that modern grids, with millions of distributed generators, need AI‑driven forecasting and real‑time dispatch to manage variability from renewables and demand. Without such tools, reliance on expensive backup generation would increase.
EVIDENCE
He outlines the shift from a few generators to “millions of generators” and the resulting three sources of variability, stating that “you need AI solutions … to schedule and marshal all of these assets in a digital … AI speed” to avoid blackouts and rising costs [393-401].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The necessity of AI for real-time grid forecasting and dispatch aligns with the importance of AI in power-sector decarbonisation described in discussions of grid balancing [S29] and collaborative approaches noted in public-private partnership analyses [S28].
MAJOR DISCUSSION POINT
AI for grid balancing
AGREED WITH
Vrushali Gaud, David Sandalow, Ankur Puri
Argument 2
Open Climate Fix offers open‑source grid‑forecasting tools and collaborates with commercial partners to spread solutions worldwide (Dan Travers)
EXPLANATION
Dan presents Open Climate Fix as an open‑source, non‑profit initiative that provides high‑accuracy solar forecasting tools, initially for the UK and now expanding to India, partnering with commercial entities to ensure scalability.
EVIDENCE
He states that they have built “the best solar forecast in the UK … by about 20-30 %” and are now taking it to India, working with Adani and Rajasthan Grid Operator, leveraging open-source and commercial expansion to transfer tools globally [416-420].
MAJOR DISCUSSION POINT
Open‑source grid forecasting
Argument 3
Advanced solar‑forecasting models and open‑source tools that can be transferred across national grids (Dan Travers)
EXPLANATION
Dan highlights the portability of AI‑based solar forecasting models, asserting that tools developed for one grid can be adapted for others, accelerating global decarbonisation efforts.
EVIDENCE
He mentions the solar forecast’s performance improvement and the intention to deploy it in India, illustrating the cross-grid applicability of the technology [417-420].
MAJOR DISCUSSION POINT
Transferable solar forecasting
Argument 4
Real‑time AI deployment can introduce security and safety risks; careful governance of generative AI is needed (Dan Travers)
EXPLANATION
Dan warns that using AI in real‑time grid operations may create new security and safety vulnerabilities, especially with generative AI, and calls for stringent governance to mitigate these risks.
EVIDENCE
He notes that “using AI in real-time operations can cause real security and safety risks” and stresses the need for caution with generative AI in this context [206-208].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Concerns about security and safety risks of real-time AI echo the governance and risk considerations highlighted in workshops on AI tools and governance frameworks [S26] and the organizational barriers discussed in adoption-risk studies [S27].
MAJOR DISCUSSION POINT
AI security and safety risks
Ankur Puri
4 arguments · 168 words per minute · 618 words · 219 seconds
Argument 1
Quantifying AI’s economic and emissions impact helps focus scarce resources on the highest‑value climate solutions (Ankur Puri)
EXPLANATION
Ankur argues that measuring both the cost‑benefit and emissions impact of AI use‑cases enables prioritisation of investments, ensuring limited resources target the most effective climate interventions.
EVIDENCE
He explains that McKinsey is “quantifying both in terms of economic impact, but also in terms of direct emissions impact” to guide scarce resource allocation [464-467].
MAJOR DISCUSSION POINT
Impact quantification for prioritisation
Argument 2
McKinsey’s knowledge hub quantifies cost‑benefit and emissions impact of AI use cases, guiding investment decisions (Ankur Puri)
EXPLANATION
Ankur describes McKinsey’s role in creating a knowledge hub that evaluates AI projects across sectors, providing data on economic returns and emission reductions to inform funding choices.
EVIDENCE
He notes that the hub “quantifies both in terms of economic impact, but also in terms of direct emissions impact” and that this helps focus scarce resources on the most important problems [464-467].
MAJOR DISCUSSION POINT
McKinsey knowledge hub
Argument 3
AI applications across energy, built environment, materials and autonomous operations identified by McKinsey’s challenge framework (Ankur Puri)
EXPLANATION
Ankur outlines a framework of four challenge areas—operational improvement, strategic intelligence, transformation, and autonomous operations—through which AI can be applied to sectors such as energy, the built environment, and materials.
EVIDENCE
He lists the four challenges and connects them to sectors like energy, built environment, materials, and food systems, providing examples such as using drones for plant inspections [447-455].
MAJOR DISCUSSION POINT
McKinsey challenge framework
Argument 4
Quantifying economic and emissions impact of AI solutions is essential to prioritize investments (Ankur Puri)
EXPLANATION
Ankur reiterates that systematic measurement of AI’s economic and carbon outcomes is crucial for directing investment toward the most impactful solutions.
EVIDENCE
He repeats that “the work’s not yet ready to be unveiled, but we are privileged … to start to now quantify, both in terms of economic impact, but also in terms of direct emissions impact” [464-467].
MAJOR DISCUSSION POINT
Need for impact quantification
Nalin Agarwal
2 arguments · 162 words per minute · 496 words · 182 seconds
Argument 1
Climate Collective’s AI‑for‑Power Innovation Platform links startups with utilities, delivering pilots and large‑scale deployments in the Global South (Nalin Agarwal)
EXPLANATION
Nalin explains that the Climate Collective runs an AI‑for‑Power platform that connects vetted startups with utilities, resulting in pilots and some large‑scale deployments, especially across the Global South.
EVIDENCE
He describes the program’s structure: “22 utilities … about 20 pilots … a subset have become large deployments” and notes the platform includes an open-innovation program, knowledge hub, and solution database [380-393].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The platform’s model of connecting startups with utilities reflects the public-private partnership and data-collaboration themes discussed in the IGF public-private partnership forum [S28] and the US-India AI partnership aimed at scaling solutions [S9].
MAJOR DISCUSSION POINT
AI‑for‑Power platform
Argument 2
AI‑driven grid modernization, pilot programmes with 22 utilities and a global startup pipeline to overcome grid bottlenecks (Nalin Agarwal)
EXPLANATION
Nalin highlights the need to modernise the electricity grid, noting that the Climate Collective’s program works with 22 utilities to run pilots that address grid bottlenecks, leveraging AI‑driven solutions.
EVIDENCE
He states that “the grid is a key bottleneck now” and outlines the process of startups applying, being selected, creating business cases and pilots, leading to large deployments, with a high conversion rate of about 30 % [376-388].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The focus on grid modernization and AI-driven pilots aligns with the broader discussion of AI’s role in decarbonising power grids and addressing bottlenecks [S29] and the need for collaborative data-driven solutions [S28].
MAJOR DISCUSSION POINT
Grid modernization pilots
Adam Sobey
3 arguments · 162 words per minute · 346 words · 127 seconds
Argument 1
Proven AI applications have cut emissions in shipping (‑18 %), building HVAC (‑42 %) and enabled renewable‑powered urban farming (Adam Sobey)
EXPLANATION
Adam cites concrete examples where AI has already delivered measurable emissions reductions: an 18 % cut in shipping emissions, a 42 % reduction in building HVAC emissions, and the creation of a renewable‑powered underground urban farm.
EVIDENCE
He reports that “AI and data science … reduced emissions by 18 % in shipping, 42 % in HVAC, and enabled an underground urban farm that runs entirely on renewable energy” [527-530].
MAJOR DISCUSSION POINT
Demonstrated AI emission cuts
Argument 2
The Alan Turing Institute partners globally, leveraging Lloyd’s Register Foundation funding to scale AI climate work beyond the UK (Adam Sobey)
EXPLANATION
Adam describes the Institute’s collaborative model, noting its partnership with the Lloyd’s Register Foundation and its focus on global cooperation to expand AI‑driven climate solutions beyond the UK.
EVIDENCE
He mentions that “the Sustainability Missions chief funder is Lloyd’s Register Foundation … we think it’s important that we work together both within the UK and outside of the UK to solve these problems” [520-527].
MAJOR DISCUSSION POINT
Global partnership and funding
Argument 3
AI‑enhanced shipping routes, building HVAC optimisation and renewable‑powered vertical farms as sector pilots (Adam Sobey)
EXPLANATION
Adam expands on sector‑specific pilots, illustrating how AI improves shipping logistics, optimises building climate control systems, and powers innovative urban agriculture, showcasing the breadth of AI’s climate impact.
EVIDENCE
He details the same three examples (shipping emissions cut, HVAC optimisation, and an underground urban farm powered by renewables) as evidence of sector-specific AI pilots [527-530].
MAJOR DISCUSSION POINT
Sector pilots for AI
Speaker 1
3 arguments · 237 words per minute · 778 words · 196 seconds
Argument 1
University‑wide AI initiatives embed climate solutions across disciplines, from energy demand forecasting to satellite‑based sea‑ice monitoring (Speaker 1)
EXPLANATION
Speaker 1 outlines how UCL integrates AI throughout the university, using it for campus energy forecasting, cement‑process optimisation, real‑estate sustainability, and sea‑ice classification, demonstrating interdisciplinary climate action.
EVIDENCE
He lists examples such as “sensor data from across our estate that forecasts energy demand”, “Carbon Re … deep reinforcement learning … cut fuel use in cement production”, “partnership with PGM real estate for AI-enabled sustainability”, and “sea-ice classification from satellite and drone imagery” [480-486][486-492].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
UCL’s campus-wide AI climate projects correspond to the cross-institutional AI-climate work highlighted in Building Climate-Resilient Systems with AI [S1] and the integrated governance of AI, climate and development identified in the sustainable AI workshop [S21].
MAJOR DISCUSSION POINT
University‑wide AI climate work
Argument 2
UCL’s Grand Challenges convene interdisciplinary teams, host international summits and translate research into real‑world climate impact (Speaker 1)
EXPLANATION
Speaker 1 describes the Grand Challenges framework at UCL, which brings together cross‑faculty teams to tackle complex problems, including climate, and notes the institution’s role in hosting an international AI‑climate summit.
EVIDENCE
He explains that Grand Challenges are “self-funded, cross-university way of tackling problems” and that they hosted “the International Summit on AI Solutions for Climate Change” in April 2025, linking research to deployment [473-492].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The Grand Challenges framework mirrors the cross-faculty collaborative approach described in Building Climate-Resilient Systems with AI [S1] and the emphasis on multi-stakeholder networks in the sustainable AI governance discussion [S21].
MAJOR DISCUSSION POINT
Grand Challenges model
Argument 3
Campus‑wide energy demand forecasting, cement‑process optimisation, real‑estate sustainability and sea‑ice classification projects at UCL (Speaker 1)
EXPLANATION
Speaker 1 provides concrete project examples where AI is applied at UCL: forecasting campus energy use, optimising cement production, improving real‑estate sustainability, and classifying sea‑ice, illustrating tangible climate benefits.
EVIDENCE
He cites the sensor-based energy demand forecasts, Carbon Re’s deep reinforcement learning for cement, the partnership with PGM real estate for AI-enabled sustainability, and the sea-ice classification tool used by Inuit communities [480-486][486-492].
MAJOR DISCUSSION POINT
Specific AI projects at UCL
Agreements
Agreement Points
Urgency of climate action and need for rapid, radical collaboration
Speakers: Uday Khemka, David Sandalow, Adam Sobey
Time pressure on climate action demands rapid, radical collaboration and scaling of AI initiatives (Uday Khemka)
The time for action is now; climate impacts are already severe (David Sandalow)
We need to do something immediately starting today (Adam Sobey)
All three speakers stress that climate change is happening now and that immediate, coordinated action, especially action leveraging AI, is essential, likening the limited time for the panel to the limited time we have to address climate change [17-20][28-29][520-525].
POLICY CONTEXT (KNOWLEDGE BASE)
This consensus mirrors calls for urgent climate action and multilateral collaboration articulated in UNGA 78 and highlighted in recent climate-tech forums, such as the “Making Climate Tech Count” report and the AI-focused summit urging radical, collaborative action [S40][S41][S42].
AI can deliver both incremental efficiency gains and transformational breakthroughs, with net climate benefit far outweighing its own emissions
Speakers: Uday Khemka, David Sandalow
AI offers incremental gains and transformational breakthroughs that together provide a net positive climate impact (Uday Khemka)
AI does have significant potential to contribute to reductions in greenhouse gas emissions; its own emissions are less than 1 % of total GHGs (David Sandalow)
Both speakers argue that AI’s climate benefits (efficiency and breakthrough innovations) exceed the small emissions footprint of AI itself, citing studies that estimate AI-related emissions at under 1 % while potential reductions are several gigatons [62-63][147-156].
POLICY CONTEXT (KNOWLEDGE BASE)
Evidence of AI’s efficiency gains in energy systems and data-centre operations is documented in the IGF AI-environment networking session and MIT’s research on reducing data-centre emissions, supporting the view that AI’s net climate benefit can exceed its own footprint [S43][S45].
Multi‑stakeholder collaborative networks are essential to scale AI‑driven climate solutions
Speakers: Uday Khemka, Vrushali Gaud, Nalin Agarwal, Ankur Puri, Spencer Low
GRAIL network unites academia, industry, NGOs, governments and investors to accelerate AI‑driven climate projects (Uday Khemka)
Google’s Climate Tech Center and open‑source data assets foster public‑good AI research and deployment (Vrushali Gaud)
Climate Collective’s AI‑for‑Power Innovation Platform links startups with utilities for pilots and large‑scale deployments (Nalin Agarwal)
McKinsey’s knowledge hub quantifies economic and emissions impact of AI use cases, guiding investment (Ankur Puri)
Building a digital public infrastructure for agriculture enables governments, NGOs and startups to deliver climate‑smart services (Spencer Low)
All speakers describe collaborative structures (GRAIL, Google’s Climate Tech Center, the Climate Collective platform, McKinsey’s knowledge hub, and a digital public agriculture infrastructure) that bring together diverse actors to develop, test and scale AI climate solutions [64-68][71-73][303-304][380-393][464-467][322-332].
POLICY CONTEXT (KNOWLEDGE BASE)
Multiple initiatives stress the importance of cross-sector partnerships, from the AI-climate summit invitation to agriculture-AI collaborations and India’s AI-energy scaling roadmap, underscoring a preferred multi-stakeholder model for scaling solutions [S41][S48][S49][S58].
Data availability and skilled personnel are critical barriers; open data and public‑good resources are needed
Speakers: David Sandalow, Vrushali Gaud, Spencer Low, Nalin Agarwal
Critical gaps: lack of high‑quality data, skilled personnel and trust hinder AI adoption in climate work (David Sandalow)
Google’s open‑source data assets (Earth AI, Flood Hub) and Climate Tech Center aim to democratise climate data (Vrushali Gaud)
A digital public infrastructure aggregates satellite and sensor data for agriculture, making it accessible to governments, NGOs and startups (Spencer Low)
The AI‑for‑Power platform provides an online solution database and knowledge hub for utilities and startups (Nalin Agarwal)
Speakers converge on the view that insufficient data and expertise limit AI’s climate impact, and that open-source datasets and shared platforms are essential to overcome these barriers [158-162][290-295][303-304][322-332][380-393].
POLICY CONTEXT (KNOWLEDGE BASE)
Calls for standardized data architectures, open interoperable ecosystems, and public-good AI tools appear in India’s AI-energy scaling plan, IGF discussions on real-time environmental data, and Google’s Earth AI release, highlighting data openness as a policy priority [S49][S56][S55].
AI‑driven operational optimisation (energy, water, grid, data centres) can generate measurable emissions reductions
Speakers: Vrushali Gaud, David Sandalow, Dan Travers, Ankur Puri
Deploying AI to optimise internal operations—energy use, water, grid routing—creates measurable emissions reductions (Vrushali Gaud)
AI can improve efficiency (incremental gains) such as power flow optimisation and water leak detection (David Sandalow)
AI‑enabled grid forecasting and real‑time dispatch are vital to balance renewable generation and avoid costly backup generation (Dan Travers)
Operational improvement is one of the four challenge areas where AI can be applied (Ankur Puri)
All four speakers highlight that applying AI to optimise existing systems (whether Google’s internal infrastructure, grid operations, or broader industrial processes) delivers concrete emission cuts and efficiency gains [284-286][260-277][147-151][393-401][447-452].
POLICY CONTEXT (KNOWLEDGE BASE)
Studies presented at the AI-environment networking session, analyses of ICT/AI impacts on power-sector decarbonisation, and MIT’s data-centre efficiency work all provide policy-relevant evidence of measurable emissions cuts from AI-enabled operational optimisation [S43][S44][S45].
AI applications in agriculture and food systems are crucial for mitigation and adaptation
Speakers: Spencer Low, David Sandalow, Adam Sobey
AI‑driven detection, prediction, optimisation and simulation are essential for agriculture, food security and climate resilience (Spencer Low)
Food systems are a major source of emissions and AI can improve both mitigation and resilience (David Sandalow)
AI‑enhanced shipping, HVAC and renewable‑powered urban farming demonstrate sector‑specific climate benefits (Adam Sobey)
Speakers agree that AI tools such as farm-boundary mapping, crop classification, and climate-smart farming are vital to reduce emissions and increase resilience in agriculture and food sectors [322-328][329-332][209-212][527-530].
POLICY CONTEXT (KNOWLEDGE BASE)
Agriculture-AI panels emphasize multi-stakeholder deployment for climate-resilient food systems, and government-led ecosystem approaches in Maharashtra illustrate policy frameworks supporting AI in agriculture [S48][S58].
Similar Viewpoints
Both emphasize that AI’s climate benefits (efficiency and breakthrough innovations) exceed the modest emissions footprint of AI itself, citing studies showing AI‑related emissions are under 1 % while potential reductions are several gigatons [62-63][147-156].
Speakers: Uday Khemka, David Sandalow
AI can deliver both incremental efficiency gains and transformational breakthroughs, with net climate benefit far outweighing its own emissions (Uday Khemka, David Sandalow)
Both stress the importance of open, publicly available climate data platforms to enable governments, NGOs and startups to develop climate‑smart services [290-295][303-304][322-332].
Speakers: Vrushali Gaud, Spencer Low
Google’s open‑source data assets (Earth AI, Flood Hub) and the digital public agriculture infrastructure provide shared climate data for broader use (Vrushali Gaud, Spencer Low)
Both highlight that without trust and proper security safeguards, AI solutions for critical infrastructure (e.g., grid operations) cannot be reliably deployed [161-162][206-208].
Speakers: Dan Travers, David Sandalow
Trust and security are essential for AI adoption in real‑time climate operations (Dan Travers, David Sandalow)
Unexpected Consensus
Both an open‑source non‑profit practitioner (Dan Travers) and an academic policy expert (David Sandalow) stress the need for trustworthy, secure AI in real‑time grid operations
Speakers: Dan Travers, David Sandalow
Real‑time AI deployment can cause security and safety risks; careful governance is needed (Dan Travers)
Trust is essential; people won’t use AI unless they trust it (David Sandalow)
While Dan focuses on technical security risks and David on user trust, both converge on the necessity of trustworthy AI for critical climate infrastructure, a convergence that is not obvious given their different roles [161-162][206-208].
POLICY CONTEXT (KNOWLEDGE BASE)
The need for trustworthy AI in grid management is reflected in AI-cybersecurity governance debates and UN statements warning of AI risks without proper oversight, reinforcing the call for secure, reliable AI in energy infrastructure [S44][S46][S51].
Academic institution (Speaker 1) and Google (Vrushali Gaud) both promote open‑source, public‑good AI tools for climate (e.g., satellite data, sea‑ice classification, Earth AI)
Speakers: Speaker 1, Vrushali Gaud
UCL’s Grand Challenges and digital innovation centre provide open AI tools for climate (Speaker 1)
Google’s Earth AI and Flood Hub are open data assets for climate research (Vrushali Gaud)
Despite representing academia and a large tech corporation, both emphasize releasing AI-driven datasets and tools as public goods to accelerate climate action, showing an unexpected alignment of open-data philosophy [486-492][290-295][303-304].
POLICY CONTEXT (KNOWLEDGE BASE)
Google’s expansion of Earth AI as a public-good platform exemplifies the push for open-source climate AI tools, a theme also echoed in collaborative AI-climate summits advocating open, shared resources [S55][S41].
Overall Assessment

There is strong consensus among the participants that urgent, collaborative action is required; AI offers significant climate benefits that outweigh its own emissions; multi‑stakeholder networks, open data, and capacity building are critical enablers; and AI can be applied both to operational efficiency and sector‑specific challenges such as agriculture and grid management.

High consensus across most speakers, indicating a shared understanding that coordinated AI‑driven initiatives, supported by open data and capacity development, are essential to accelerate climate mitigation and adaptation. This alignment suggests momentum for concrete joint programmes, funding mechanisms and policy support to scale AI solutions globally.

Differences
Different Viewpoints
Speed of action versus need for security and trust in AI deployment
Speakers: Uday Khemka, Dan Travers, David Sandalow
Urgency and radical collaboration (Uday Khemka)
Real‑time AI can cause security and safety risks (Dan Travers)
Trust is essential; people won’t use AI unless they trust it (David Sandalow)
Uday stresses that climate action must be rapid, calling the limited panel time a metaphor for the urgency of climate work and urging radical, fast collaboration [17-20][28-29]. Dan warns that deploying AI in real-time grid operations introduces security and safety risks, urging caution especially with generative AI [206-208]. David highlights that lack of trust is a major barrier to AI adoption and that building trust is essential for effective use [161-162]. The speakers therefore diverge on how quickly AI solutions should be rolled out versus the safeguards required before widespread deployment.
POLICY CONTEXT (KNOWLEDGE BASE)
The tension between rapid AI deployment and the need for security governance is a central theme in AI-cybersecurity trust discussions and literature on fit-for-purpose AI governance, which call for balanced, non-impulsive action [S46][S47][S50].
Extent of data availability for climate AI projects
Speakers: David Sandalow, Vrushali Gaud, Uday Khemka
Lack of high‑quality data is a main barrier (David Sandalow) Google is releasing open‑source data assets (Earth AI, Flood Hub) and launching a Climate Tech Center (Vrushali Gaud) GRAIL created an online collaborative platform but data gaps remain (Uday Khemka)
David identifies insufficient data as a primary obstacle to AI’s climate impact [158-160]. Vrushali counters by describing Google’s open-source initiatives (Earth AI, Flood Hub, and a Climate Tech Center in India) intended to democratise climate data and support research [290-295][303-304]. Uday notes the creation of a GRAIL collaborative platform while also acknowledging the need for more data and digital public infrastructure [73-75]. This reflects a disagreement on whether data scarcity is a critical bottleneck or can be rapidly addressed through corporate open-data efforts.
POLICY CONTEXT (KNOWLEDGE BASE)
Debates over data availability are highlighted in India’s AI-energy scaling roadmap, which stresses standardized data architectures, and in IGF sessions that examine challenges of accessing real-time environmental datasets [S49][S56].
Preferred model for scaling AI‑driven climate solutions
Speakers: Dan Travers, Nalin Agarwal, Ankur Puri, Vrushali Gaud
Open‑source, transferable tools for grid forecasting (Dan Travers) Startup‑utility pilot platform linking innovators to utilities (Nalin Agarwal) Quantify economic and emissions impact before scaling (Ankur Puri) Internal corporate optimisation and partnership‑driven deployment (Vrushali Gaud)
Dan promotes open-source solar-forecasting tools that can be transferred across national grids [417-420]. Nalin describes a structured program that matches AI startups with utilities to run pilots and scale deployments [380-393]. Ankur stresses the need to first quantify cost-benefit and emissions impact to prioritize investments [464-467]. Vrushali focuses on using AI internally at Google to optimise resources and on partnerships to drive outcomes [284-286][303-304]. The speakers share the goal of scaling AI for climate but disagree on the most effective pathway: open-source diffusion, structured pilot programs, impact quantification, or corporate-centric optimisation.
POLICY CONTEXT (KNOWLEDGE BASE)
Various scaling models, from multi-stakeholder networks to government-led ecosystem approaches, are contrasted in AI-energy scaling panels and agriculture AI policy frameworks, illustrating ongoing policy deliberations about the optimal model [S49][S58][S41].
Unexpected Differences
Security risks of AI versus optimism about AI’s net climate benefit
Speakers: Dan Travers, David Sandalow
Real‑time AI can cause security and safety risks (Dan Travers) AI’s net climate benefit far outweighs its own emissions (<1 % GHG) (David Sandalow)
While David emphasizes that AI’s emissions are negligible and its climate upside is large, Dan raises concerns that deploying AI in real-time grid operations could introduce new security and safety vulnerabilities, suggesting a more cautious stance than the overall optimism expressed elsewhere [155-156][206-208].
POLICY CONTEXT (KNOWLEDGE BASE)
The dual narrative of AI’s security challenges versus its climate promise appears in UN security warnings, civil-society critiques that question overstated climate benefits, and analyses of AI’s double-edged impact on security and emissions [S46][S51][S52][S53].
Overall Assessment

The panel largely converges on the promise of AI to aid climate mitigation and adaptation, but key tensions arise around the speed of deployment versus the need for security and trust, the perceived scarcity of high‑quality data, and the optimal model for scaling solutions (open‑source, pilot‑based, or impact‑driven). These disagreements are moderate rather than fundamental, indicating that while participants share common goals, they differ on implementation pathways and risk management.

Moderate disagreement; implications suggest that coordinated governance frameworks and clear data‑sharing strategies will be needed to reconcile urgency with safety and to align scaling approaches across corporate, open‑source, and public‑private partnership models.

Partial Agreements
All agree that the power sector must be decarbonised, but differ on the mechanism: Dan stresses real‑time AI tools for grid balancing; Nalin proposes a startup‑utility pilot ecosystem; Ankur calls for rigorous impact quantification before scaling; Uday calls for radical collaboration via GRAIL to accelerate solutions [196-201][393-401][380-393][464-467].
Speakers: Uday Khemka, Dan Travers, Nalin Agarwal, Ankur Puri
Decarbonise the power sector (Uday Khemka) AI‑enabled grid forecasting and real‑time dispatch are vital (Dan Travers) AI‑for‑Power Innovation Platform links startups with utilities (Nalin Agarwal) Quantify AI’s economic and emissions impact to focus resources (Ankur Puri)
All aim to democratise climate data and tools, yet differ in delivery: Vrushali highlights Google’s Earth AI and Flood Hub as open data assets; Spencer describes the Krishi DSS digital public good for agriculture; David proposes a comprehensive AI‑Climate report with primers for both climate and AI audiences. Each proposes a different format for making data and knowledge accessible [290-295][329-332][124-138].
Speakers: Vrushali Gaud, Spencer Low, David Sandalow
Open‑source climate data platforms (Vrushali Gaud) Digital public infrastructure for agriculture (Spencer Low) AI‑Climate report with primers and recommendations (David Sandalow)
Takeaways
Key takeaways
AI is seen as a pivotal lever to address the triple challenge of development, climate mitigation, and adaptation, delivering both incremental efficiency gains and transformational breakthroughs while its own emissions are negligible (< 1 %). Collaboration across academia, industry, NGOs, governments, and investors—exemplified by the GRAIL network, Google’s Climate Tech Center, Climate Collective, and the Alan Turing Institute—is essential to scale AI‑driven climate solutions quickly. Sector‑specific AI opportunities were highlighted: data‑center decarbonisation, clean‑energy procurement, grid forecasting and real‑time dispatch, agricultural mapping and crop monitoring for smallholders, materials and cement optimisation, shipping route optimisation, HVAC optimisation, and satellite‑based climate monitoring. Quantifying the economic and emissions impact of AI use cases (McKinsey, GRAIL taxonomy) is critical to prioritize scarce resources and direct investment toward the highest‑value interventions. Key barriers remain: lack of high‑quality data (especially in the Global South), shortage of skilled personnel, trust in AI outputs, and governance of real‑time or generative AI to avoid security and safety risks. Urgency was repeatedly stressed: climate action timelines are far shorter than the development curve of AI, demanding rapid, radical, and coordinated collaboration.
Resolutions and action items
Launch and invite all participants to join the GRAIL online collaborative platform for co‑creating AI‑climate solutions. Google to operationalise its Climate Tech Center in India, focusing on pilots in electricity, low‑carbon materials, sustainable aviation fuel, and green‑skills training for tier‑2 cities. Climate Collective and GRAIL to develop a global AI‑for‑Power Innovation Platform (open‑innovation program, knowledge hub, solution database) linking startups with utilities, with pilots already underway in 22 utilities. Open Climate Fix to open‑source its solar‑forecasting tool and expand deployment from the UK to India and other grids, partnering with commercial entities (e.g., Adani, Rajasthan Grid Operator). McKinsey’s Quantum Black team to continue quantifying cost‑benefit and emissions impact of identified AI use cases, feeding the results back into GRAIL’s prioritisation framework. All climate‑focused organisations were urged to create dedicated AI teams and embed AI considerations into mitigation and adaptation strategies. The Alan Turing Institute to leverage Lloyd’s Register Foundation funding to scale AI climate work globally and to partner with other GRAIL participants.
Unresolved issues
Standardised, high‑resolution data sets for many sectors (especially agriculture and grid operations in the Global South) remain insufficient. Developing and scaling green‑skill training programmes to create a workforce capable of deploying AI for climate across diverse regions. Establishing robust governance frameworks for real‑time and generative AI applications to mitigate security, safety, and trust concerns. Quantitative validation of the projected emissions reductions (e.g., 3.5‑5.4 Gt CO₂e saved vs 0.5‑1.4 Gt from data‑centres) is still ongoing. Integration of AI solutions across fragmented industry silos (power, built environment, materials, food systems) lacks a unified implementation roadmap.
Suggested compromises
Accepting a modest increase in AI‑related emissions from data‑centres (0.5‑1.4 Gt CO₂e) in exchange for a much larger net reduction (3.5‑5.4 Gt CO₂e) from AI‑enabled climate actions. Balancing focus between mitigation (efficiency, emissions cuts) and adaptation (resilience, flood mapping) to address both immediate and long‑term climate risks. Combining corporate‑scale clean‑energy procurement with open‑source data and public‑good tools to accelerate both internal decarbonisation and external climate services. Prioritising high‑impact, low‑data‑requirement use cases first while continuing to develop data‑intensive solutions as data infrastructure improves.
Thought Provoking Comments
We are dealing with a triple challenge – development, a sustainable planet, and climate change – and we have very little time, so we must move into action mode.
Frames the entire discussion around three interlinked imperatives and stresses urgency, setting a high‑stakes context for all subsequent contributions.
Established the overarching narrative, prompting speakers to position their solutions as addressing all three pillars and creating a sense of collective urgency that shaped the tone of the panel.
Speaker: Uday Khemka
We talked to the AI community and the industrial sectors and found that *people were not talking to each other* – very few AI experts were focused on downstream climate issues, and vice‑versa.
Identifies a critical systemic barrier – siloed expertise – that explains why existing AI tools have not been leveraged for climate mitigation.
Triggered a shift from abstract enthusiasm to concrete calls for cross‑sector collaboration, leading speakers like David Sandalow and Vrushali Gaud to discuss bridging these gaps.
Speaker: Uday Khemka
The Grantham Institute quantified data‑center emissions at 0.5–1.4 Gt CO₂e, while AI could enable the removal of 3.5–5.4 Gt CO₂e – a clear net benefit.
Provides empirical evidence that counters the common criticism that AI’s own carbon footprint outweighs its benefits.
Gave credibility to the argument for scaling AI, prompting David Sandalow to acknowledge the small share of emissions from AI (<1%) and reinforcing the panel’s pro‑AI stance.
Speaker: Uday Khemka
Based on available evidence, less than 1 % of global GHG emissions currently come from AI, and the main barriers are lack of data, trained personnel, and trust.
Offers a balanced, data‑driven perspective that tempers optimism with realistic challenges, grounding the discussion in practical terms.
Shifted the conversation from hype to actionable priorities, leading participants to emphasize data sharing, capacity building, and trust mechanisms.
Speaker: David Sandalow
AI capabilities can be grouped into four high‑level functions: detect patterns, predict outcomes, optimize processes, and simulate scenarios.
Provides a clear conceptual framework that helps participants map AI tools to specific climate applications across sectors.
Guided later speakers (e.g., Spencer Low on agriculture, Dan Travers on grids) to structure their examples around these functions, deepening the technical discussion.
Speaker: David Sandalow
Using AI in real‑time operations can cause security and safety risks; we must be very careful about generative AI in this context.
Introduces a nuanced risk perspective that balances the earlier optimism about AI’s potential.
Prompted a more cautious tone, influencing Dan Travers to mention the need for reliability in grid AI and encouraging the panel to consider governance and safety measures.
Speaker: David Sandalow
Google is a *full‑stack* company – beyond search we operate data centers, grids, water, and circularity; our climate work is about operationalizing decarbonisation across all these layers.
Expands the definition of corporate climate responsibility from product‑level to infrastructure‑level, illustrating how a tech giant can influence emissions holistically.
Shifted the discussion toward systemic corporate actions, leading to concrete examples like the Google Center of Climate Tech and inspiring other participants to think beyond narrow use‑cases.
Speaker: Vrushali Gaud
We are launching a *Google Center of Climate Tech* in India to incubate five pilots in low‑carbon steel, sustainable aviation fuel, and green skills for tier‑two cities.
Shows a tangible, region‑specific initiative that operationalises the earlier call for collaboration and addresses both the technology and skills gaps.
Served as a turning point that moved the panel from abstract collaboration to concrete programmatic action, prompting interest from other speakers about scaling pilots.
Speaker: Vrushali Gaud
Smallholder farms (often <2 ha) are the majority in the Global South, yet most agri‑tech is built for large farms; we’ve trained AI to map field boundaries and detect crops to enable digital public infrastructure for these farmers.
Highlights a specific, underserved segment and demonstrates how AI can be adapted to local contexts, addressing equity and scalability concerns.
Introduced agriculture as a critical sector, broadening the panel’s focus beyond energy and prompting discussion on data democratization and startup involvement.
Speaker: Spencer Low
The grid now has millions of distributed generators and variable demand; without AI we risk blackouts, higher costs, and democratic push‑back against the green transition.
Articulates the systemic complexity of modern grids and frames AI as essential for both technical reliability and social acceptance of decarbonisation.
Steered the conversation toward grid modernization, reinforcing the urgency expressed earlier and linking technical needs to political risk, which resonated with Ankur Puri’s prioritisation framework.
Speaker: Dan Travers
We have identified four challenge categories – operational improvement, strategic intelligence, transformation, and autonomous operations – and we are now quantifying economic and emissions impact to focus scarce resources on the highest‑value problems.
Provides a strategic prioritisation lens that moves the discussion from idea generation to impact measurement and resource allocation.
Encouraged participants to think about metrics and ROI, influencing later remarks about pilots, scaling, and the need for evidence‑based investment.
Speaker: Ankur Puri
At UCL AI is not a single discipline but an *enabling layer* embedded across all faculties; our Grand Challenges bring together engineers, artists, and health experts to tackle climate together.
Demonstrates a cross‑disciplinary institutional model that operationalises the earlier call for collaboration and shows how AI can be a unifying tool.
Reinforced the theme of interdisciplinary collaboration, inspiring other speakers to consider broader institutional partnerships and adding depth to the conversation about scaling AI for climate.
Speaker: Speaker 1 (UCL)
Overall Assessment

The discussion was shaped by a series of pivotal comments that moved the panel from a high‑level framing of the climate‑AI‑development triple challenge to concrete, sector‑specific applications and strategic frameworks. Uday Khemka’s opening set the urgent, collaborative tone, while his identification of silos and the data‑center emissions balance provided a factual foundation. David Sandalow’s balanced appraisal and capability framework introduced nuance and a practical roadmap, prompting participants to address data, talent, trust, and safety concerns. Corporate and academic leaders (Vrushali Gaud, Spencer Low, Dan Travers, Ankur Puri, and the UCL representative) each contributed concrete initiatives—full‑stack climate action, farmer‑focused AI, grid modernization, impact‑focused prioritisation, and cross‑disciplinary Grand Challenges—that transformed the dialogue into a catalogue of actionable pathways. Collectively, these insightful remarks redirected the conversation from abstract optimism to measurable, collaborative action, highlighting both opportunities and the systemic barriers that must be overcome to realise AI‑driven climate solutions.

Follow-up Questions
How can we build and share data infrastructure in the Global South to enable AI‑driven climate solutions?
All three highlighted the lack of data in the Global South as a major barrier to applying AI for mitigation and adaptation.
Speaker: David Sandalow, Spencer Low, Vrushali Gaud
What safeguards and risk‑mitigation strategies are needed for the use of generative AI in real‑time grid operations?
He warned that generative AI can create security and safety risks in grid control and called for careful safeguards.
Speaker: David Sandalow
How can trust be established in AI tools among climate‑mitigation organisations?
Trust was identified as essential for adoption; without it organisations will not use AI solutions.
Speaker: David Sandalow
What methodologies can be used to quantify the economic and emissions impact of AI applications across sectors?
He noted a gap in valuing AI ideas in terms of cost and emissions, suggesting the need for robust quantification frameworks.
Speaker: Ankur Puri
How can AI models be adapted for smallholder farms and integrated into local decision‑making?
He pointed out that most agri‑AI tools are built for large farms, leaving smallholders underserved.
Speaker: Spencer Low
What approaches can democratize climate data and accelerate AI‑driven innovation at scale?
She asked how to open‑source large datasets and incubate initiatives so that more actors can build on them.
Speaker: Vrushali Gaud
What effective strategies can develop green climate skills in tier‑two Indian cities?
She highlighted the need for skill‑building programmes to embed climate‑first thinking beyond major metros.
Speaker: Vrushali Gaud
What is the net climate impact of AI data centres when considering both their emissions and the emissions reductions they enable?
Uday cited Grantham estimates; David reiterated the need to balance data‑centre GHGs against AI‑driven mitigation benefits.
Speaker: Uday Khemka, David Sandalow
What data standards are needed to enable AI applications in the power sector?
He stressed that standardized, high‑quality data is a prerequisite for AI‑driven grid optimisation.
Speaker: David Sandalow
How transferable are AI grid‑forecasting models across different national grids?
He described success in the UK and plans for India, raising the question of cross‑regional model portability.
Speaker: Dan Travers
What is the impact of AI‑driven flood‑risk platforms (e.g., Flood Hub) on mitigation and adaptation outcomes?
She mentioned Flood Hub as a tool for risk mapping, prompting evaluation of its real‑world effectiveness.
Speaker: Vrushali Gaud
How can AI be applied to low‑carbon steel production, sustainable aviation fuel and other non‑electricity sectors?
She noted these sectors as major levers and asked for AI‑enabled pathways to decarbonise them.
Speaker: Vrushali Gaud
How can AI be used to detect and reduce water‑infrastructure leaks?
She cited water‑tap leakage as a low‑hanging‑fruit for AI optimisation.
Speaker: Vrushali Gaud
How can AI optimise chip usage and data‑centre energy consumption?
She referenced internal Google efforts to improve efficiency of chips and data‑centre operations.
Speaker: Vrushali Gaud
What are best practices for integrating AI into extreme‑weather forecasting and response?
He highlighted AI/ML‑enabled weather prediction as transformational for resilience, needing further study of implementation best practices.
Speaker: David Sandalow

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

Building India's Digital and Industrial Future with AI

Building India's Digital and Industrial Future with AI

Session at a glance: Summary, keypoints, and speakers overview

Summary

The panel, opened by GSMA, highlighted the convergence of AI, telecom and data sovereignty around digital public infrastructure, emphasizing the need for intelligent, programmable networks as core national assets [1][8-10]. Julian Gorman argued that mobile networks are shifting from mere connectivity to trusted layers that shape AI model performance, enable real-time services such as fraud prevention and digital identity, and therefore must be integral to future digital infrastructure [14-16][18-20]. He further stressed that digital sovereignty now requires strategic control over standards and intelligence, not just data localisation, and that global interoperable standards are essential to avoid fragmentation and keep countries connected to the global digital economy [21-24][25-27].


Debashish reinforced this by noting that networks have evolved from voice to intelligent platforms that embed AI for identity verification, fraud mitigation and sovereign data decision-making, and that ensuring trust, interoperability and global compatibility is a key challenge [37-41]. Rahul Vatts illustrated India’s experience, citing the processing of 28 lakh crore rupees through UPI across a billion users, supported by a massive connectivity layer of over a million BTSs and 500 lakh km of fibre, and described how OTP/SMS, Aadhaar-enabled payments and telecom-derived credit scores create trust for billions of transactions [51-58][61-62][65-67][69]. A telecom-service-provider representative added that the DPI ecosystem adds contextual enrichment to raw data, offering open APIs that allow banks and other entities to access real-time fraud and authentication signals, thereby turning the network into a governance and resilience layer [82-88][100-108][110-112].


Deepak Maheshwari outlined three tiers of data sovereignty – physical/administrative control for critical state data, citizen-controlled data that may flow internationally, and business data that balances India’s outsourcing role with the need for two-way data flows – and called for active participation in international standard-setting rather than unilateral control [141-148][155-162][164-170][172-179]. Mansi Kedia warned that building public digital infrastructure and private capabilities in silos creates inefficiencies, stifles innovation and weakens trust, and advocated for flexible blueprints and standards that can be adapted across contexts while preserving interoperability [204-209][218-224][226-232]. Rahul then broke down practical sovereignty into four slices – data residency, control-plane localisation, operational control of network software, and jurisdictional exposure to foreign laws – and described Airtel’s sovereign-cloud offering that keeps data within its own network to meet these criteria [235-242][245-252][255-262].


Martin (representing Vodafone Idea) highlighted emerging regulatory frictions such as the need for AI explainability, accountability and clear standards for digital intermediaries, suggesting that industry-wide playbooks and reference frameworks are required to balance security with innovation [279-286][288-295][300-307][310-314]. Deepak emphasized that India’s DPI model is open, royalty-free and supported by diplomatic and research institutions, making it a scalable, adaptable framework for Global South countries without imposing monetisation barriers [318-326][330-338][340-347]. Mansi concluded that India’s extensive DPI experience, from UPI to mobile-data-driven credit scoring, provides concrete evidence for other emerging economies and that ongoing collaboration with multilateral bodies will help evolve standards and blueprints for inclusive digital development [353-360][363-368].


The discussion ended with consensus that achieving AI-enabled, sovereign yet interoperable digital infrastructure will depend on open standards, contextual data enrichment, and coordinated policy frameworks that align national priorities with global innovation ecosystems [22-24][218-224][279-286].


Keypoints


Major discussion points


Telecom networks are evolving from simple connectivity providers to intelligent, AI-enabled platforms that underpin Digital Public Infrastructure (DPI).


Julian highlighted that “today’s mobile networks are becoming intelligent, programmable and trusted layers of the national infrastructure” and are now part of AI model decision-making, fraud prevention and digital identity security [13-18]. Debashish reinforced this evolution, noting that networks are “no longer passive carriers of data” but “intelligent platforms where AI is deployed” [37-40]. Rahul gave concrete examples, describing how OTP/SMS, Aadhaar-enabled payments and real-time fraud indicators rely on the connectivity layer [51-63].


Digital sovereignty now means more than data localisation; it requires strategic control over standards, AI models, and the underlying infrastructure.


Julian framed sovereignty as “strategic control over the infrastructure… the standards, and increasingly, the intelligence that underpins the national digital system” [18-22]. Deepak expanded the concept, distinguishing physical data location, local contextual relevance, and the need for participation in global standard-making rather than unilateral control [140-176]. Rahul added practical dimensions, separating data residency, control-plane localisation, operational sovereignty (software patches) and jurisdictional sovereignty (e.g., US CLOUD Act) [235-242].


Avoiding fragmented or parallel digital infrastructures requires interoperable open standards, APIs and collaborative blueprints.


Julian warned that “fragmentation… slows down” and that “open APIs, harmonised frameworks” are essential for scaling [23-25]. Martin’s question about “parallel digital infrastructure structures” and the risk of duplication underscored this concern [76-78]. Speaker 1 described how TSPs collaborate via open APIs (e.g., FRI, Digital Intelligence Platform) to provide contextual data to banks and other entities [118-124][127-130]. Mansi emphasized that “systems coming together build trust, efficiency and innovation” and that the World Bank favours flexible blueprints that incorporate best-practice standards [204-210][220-226].


Concrete use-cases illustrate how AI-enhanced networks deliver trust, fraud mitigation and financial inclusion.


Rahul detailed Airtel’s role in securing UPI transactions, OTP delivery within 2 ms, and AI-driven spam/scam detection that adds friction to fraudulent calls [51-63][66-70]. Speaker 1 explained the enrichment of raw call data with contextual signals (e.g., Aadhaar verification vs. call location) to enable real-time fraud decisions [102-110]. These examples show the network acting as a “contributor to governance, resilience and trust” [17].


India’s open, scalable DPI model is positioned as a template for the Global South, combining open protocols with diplomatic outreach.


Deepak argued that India offers an “open protocol” DPI framework without licensing fees, enabling other countries to adopt and adapt it [318-327]. He also highlighted diplomatic mechanisms (e.g., Ministry of External Affairs think-tanks) that support capacity-building [332-334]. Mansi noted that India’s experience with UPI, NPCI and the “Finternet” concept provides evidence for other emerging economies and that multilateral collaborations are already shaping global adoption [353-360].


Overall purpose / goal of the discussion


The session aimed to move beyond high-level rhetoric about AI, telecom and digital sovereignty and to “translate them into direction… identify practical next steps… create space for collaboration” (Julian [28-31]). Participants sought to share India’s DPI experience, surface policy and technical challenges, and outline how global standards and open blueprints can enable inclusive, secure, and interoperable digital ecosystems for both advanced and developing economies.


Overall tone and its evolution


– The conversation opened with a formal, visionary tone-Julian’s welcoming remarks framed the topic as a strategic crossroads [5-9].


– It then shifted to a technical and evidential tone, with detailed statistics, product descriptions, and operational insights from Rahul, the TSP speaker, and Deepak [51-63][95-108][140-150].


– Mid-session, the tone became collaborative and policy-focused, emphasizing standards, avoidance of fragmentation, and the need for multistakeholder governance [23-25][76-78][204-210].


– Towards the end, the tone turned optimistic and diplomatic, highlighting India’s role as a model for the Global South and the potential for international cooperation [318-327][353-360].


– The closing remarks returned to a courteous, appreciative tone, thanking speakers and the audience [377].


Overall, the discussion maintained a constructive and forward-looking atmosphere, moving from high-level vision to concrete examples, then to policy coordination, and finally to global outreach.


Speakers

Rahul Vatts – Chief Regulatory Officer, Airtel; expert on telecom infrastructure, digital payments, fraud mitigation, and AI-enabled services[S1]


Speaker 1 – Unspecified role (appears to be a telecom service-provider/TSP representative discussing DPI context, enrichment, and open APIs)[S2]


Julian Gorman – Head of APAC, GSMA; specialist in intelligent networks, digital public infrastructure and global standards[S5][S6]


Debashish Chakraborty – Moderator, GSMA representative; focuses on convergence of AI, telecom and data sovereignty[S7]


Deepak Maheshwari – Representative, Centre for Social and Economic Progress (CSEP); expertise in data sovereignty, AI policy, and digital public infrastructure[S8][S9][S10]


Mansi Kedia – Representative, World Bank; works on digital public infrastructure blueprints, standards and development policy[S11]


Audience – Various participants (e.g., Vijay Agarwal – jewelry manufacturer interested in KYC-embedded wearables; Yuv from Senegal; Professor Charu, Indian Institute of Public Administration; Dr. Nazar)[S12][S13][S14]


Additional speakers:


Martin (Martin Schroeter) – Representative of Vodafone Idea; contributes perspectives on regulatory challenges, AI-driven networks and open-gateway APIs[S2]


Matan – Speaker referenced in the discussion on DPI and data context; specific role or affiliation not provided in the transcript or sources.


Vijay Agarwal – Audience member, jewelry manufacturer proposing KYC-embedded ring concept; raised a question on data embassies.


Mike – Mentioned briefly in the audience Q&A; no further details available.


Ambika – Named in the moderator’s prompt but did not speak; no role or contribution recorded.


Full session report: Comprehensive analysis and detailed insights

The session opened with Debashish Chakraborty introducing the theme of “convergence of AI, telecom and data sovereignty all weaved around the digital public infrastructure” and handing over to Julian Gorman, head of APAC GSMA, for the keynote address [1-4]. Julian welcomed the participants and positioned GSMA as the global body that unites the mobile economy to “unlock the power of connectivity so industry and society thrive” [5-7]. He explained that today’s mobile networks are evolving from simple connectivity providers into intelligent, programmable layers that shape AI model performance, enable real-time services and embed governance functions, thereby redefining digital sovereignty as strategic control over infrastructure, standards and the underlying intelligence [13-22]. Julian concluded by stating the session’s purpose: to translate these themes into concrete directions, practical next steps and collaborative space [28-31], before handing back to Debashish [32].


Debashish reinforced Julian’s vision, observing that telecom networks have moved from voice-only services to “the trusted digital infrastructure that we use today underpinning the modern economies” and are now “intelligent platforms where AI is deployed either as an add-on or embedded already into the network” [37-40]. He highlighted that this shift enables digital-identity authentication, fraud mitigation and the exercise of data sovereignty [41-42].


Rahul Vatts, Chief Regulatory Officer of Airtel, illustrated the scale of India’s Digital Public Infrastructure (DPI). In a single month the UPI system processed 28 lakh crore rupees across a billion users, supported by a connectivity layer of over a million base-transceiver stations and 5 million km of fibre [51-54]. He explained that every transaction relies on the network for OTP/SMS delivery within two milliseconds, providing the “layer of trust” essential for Aadhaar-enabled payments and other citizen-centric services [55-63]. Rahul also described Airtel’s AI-driven anti-spam and fraud solutions that add friction to suspicious calls, thereby reinforcing trust in the ecosystem [66-70]. He noted that quantum-secure techniques are already being explored for Aadhaar, signalling early work on next-generation data protection [??].


Debashish then asked Martin (Vodafone Idea) how to ensure that the DPI trust layers being built by mobile network operators complement rather than duplicate the GSMA Open Gateway APIs [76-78]. Martin (Speaker 1) answered that the TSP community already collaborates with the GSMA team on Open Gateway APIs, several of which have now been certified [98-108][110-112][118-124]. He added that TSPs expose contextual enrichment (such as Aadhaar verification versus call-location data) through open APIs like the Fraud-Risk-Indicator (FRI) and the Digital Intelligence Platform, enabling banks and other institutions to generate risk scores in milliseconds for micro-loans [102-110][124-126].
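The risk-scoring flow described here, telco context signals collated into a score that a bank can consume in milliseconds, can be sketched roughly as follows. This is a minimal illustration under assumed signal names, weights, and a made-up scoring rule; it is not the actual FRI or Digital Intelligence Platform interface.

```python
# Illustrative sketch only: the signal names, weights, and scoring rule below
# are hypothetical, not the real FRI or Digital Intelligence Platform API.
from dataclasses import dataclass

@dataclass
class TelcoSignals:
    sim_age_days: int             # how recently the SIM was issued
    on_call_during_otp: bool      # an OTP arrives while a call is in progress
    auth_location_mismatch: bool  # Aadhaar auth location differs from network location

def risk_score(s: TelcoSignals) -> int:
    """Combine telco context signals into a 0 (low) to 100 (high) risk score."""
    score = 0
    if s.sim_age_days < 30:       # freshly issued SIMs carry more fraud risk
        score += 40
    if s.on_call_during_otp:      # the "OTP while on a call" pattern from the panel
        score += 30
    if s.auth_location_mismatch:  # the location-A-versus-B example from the panel
        score += 30
    return min(score, 100)

# A bank-side consumer would threshold the score before approving a micro-loan.
print(risk_score(TelcoSignals(sim_age_days=15, on_call_during_otp=True,
                              auth_location_mismatch=False)))  # prints 70
```

The point of the sketch is only that the telco holds context the bank lacks, and that a simple, fast lookup over that context is enough to inform a lending or authentication decision.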


Deepak Maheshwari expanded the discussion on digital sovereignty. He argued that sovereignty in an AI-driven world is not only about data residency but also about “having strategic control over the infrastructure” and the standards that govern it [18-22]. He distinguished three tiers of sovereignty: (i) physical/administrative control for critical state data, (ii) citizen-controlled data that may flow internationally, and (iii) business data that must balance India’s outsourcing role with two-way data flows [140-162]. Deepak warned against “walls” that block two-way traffic and advocated active participation in multistakeholder standard-setting bodies (GSMA, ISO, ITU, IEEE) rather than unilateral control [173-179]. He concluded by invoking the historic “3C” framework of carriage, content and conduct as a lens to view today’s AI-driven DPI [??].


Mansi Kedia (World Bank) distinguished between “standards” (prescriptive, commercialised) and “blueprints” (flexible, adaptable best-practice collections), arguing that the latter better suit emerging economies while still ensuring interoperability [219-226][227-233]. She referenced the “Finternet” concept being explored with the BIS as evidence of scalable, interoperable digital rails [353-360][363-368].


Rahul returned to the sovereignty theme with a “four-slice” model covering data residency, control-plane localisation, operational sovereignty (software patches) and jurisdictional sovereignty (e.g., exposure to the US CLOUD Act), asserting that true sovereignty requires all four dimensions [235-250]. He advocated a “selective residency” approach: keep critical public-interest data (KYC, health, defence) under domestic control while leveraging hyperscaler efficiencies for non-critical workloads [245-252][255-262].


Martin (Vodafone Idea) raised regulatory frictions, calling for industry-wide referenceable standards or playbooks that would make AI decisions explainable and ensure compliance with emerging digital-intermediary rules [279-286][288-295][300-307][310-314]. He warned that private sovereign-cloud offerings could create parallel DPI layers, fragmenting the ecosystem [239-244][262-270].


Deepak stressed that India’s DPI model is an “open protocol” with no royalty or IP fees, allowing other countries to adopt and adapt it freely [318-327]. He noted that diplomatic channels-such as the Ministry of External Affairs think-tank and the Indian Council of World Affairs-provide soft-power support for capacity-building abroad [332-334].


An audience member, Vijay Agarwal, proposed a novel “ring-based” KYC and medical data vault that would store personal data on the body, encrypt it, and use blockchain-based consent, effectively creating a personal data-embassy [371-374]. Deepak responded affirmatively to the idea of reciprocal data-embassy arrangements [375], while Rahul pointed out that Aadhaar already incorporates masking and that data-embassy concepts are being explored at the governmental level [376-380].


After the Q&A, Debashish recalled that, until about 15 years ago, the IRCTC registration form required a compulsory marital-status field that offered no clear benefit, underscoring how data-collection norms have evolved and informing current debates on data sovereignty [??].


The panel broadly agreed that (i) telecom networks are now AI-enabled public layers delivering trust, fraud mitigation and digital-identity services; (ii) digital sovereignty must extend beyond data localisation to include control-plane, operational and jurisdictional dimensions; (iii) open, interoperable standards or flexible blueprints are essential to avoid duplicated DPI stacks; and (iv) India’s open, scalable DPI model offers a credible template for the Global South [13-22][23-25][318-327][355-362]. Diverging views emerged around the preferred normative instrument (hard global standards versus adaptable blueprints) [22-25][219-226] and the relative priority of AI explainability versus technical-jurisdictional aspects of sovereignty [279-286][236-250].


Key take-aways included: (a) continued development of open APIs that enrich raw network data with contextual signals; (b) regulatory frameworks that address AI explainability, digital-intermediary responsibilities and jurisdictional exposure; (c) promotion of India’s open DPI protocol through diplomatic and multilateral channels; (d) exploration of data-embassy and personal-device data-vault concepts; and (e) creation of sector-specific playbooks that balance security with innovation. Unresolved issues highlighted were the concrete design of jurisdiction-safe sovereign clouds, mechanisms to prevent parallel DPI infrastructures across operators, and detailed governance models for data-embassy arrangements [285-293][255-262][371-380].


In closing, Debashish thanked all speakers and the audience, underscoring the collaborative spirit of the session and the shared commitment to building AI-enabled, sovereign yet interoperable digital public infrastructure [377].


Session transcriptComplete transcript of the session
Debashish Chakraborty

convergence of AI, telecom, and data sovereignty all weaved around the digital public infrastructure. I’m Debashish. I represent GSMA. I’ll request Julian Gorman, head of APAC GSMA, to give his keynote address and then we start with the panel discussion. Julian.

Julian Gorman

Good morning, everyone. Warm welcome, distinguished guests, colleagues and partners and speakers who have joined us today. It’s a great honour to actually open this session for GSMA. GSMA, for those who don’t know, is the global organisation uniting the mobile economy, that means mobile operators and the ecosystem, to unlock the power of connectivity so industry and society thrive. And this session really goes to the core of that around intelligent networks, intelligent telecom networks for digital public infrastructure, a topic that sits right at the intersection of where the telecom industry is heading and where national digital public infrastructure is heading. And that’s where we’re being built. Of course. India is really at a pivotal point in its digital journey and a key player in this space.

They’ve been on the digital public infrastructure journey for a lot longer than the rest of us, but over the last decade, we’ve really seen the rise of digital public infrastructure recognised, from identity and payments to digital commerce and data empowerment, and it has shown the world what is possible when scale, innovation and public purpose come together: it has delivered inclusion, trust and economic impact at a level few countries have achieved. But as we enter this next phase, which is shaped by AI, real-time data and increasingly autonomous systems, we need to ask a fundamental question, and that is: what role do the telecom networks play in this new digital infrastructure? For years, networks were viewed simply as connectivity providers, and that view is changing.

Today’s mobile networks are becoming intelligent, programmable and trusted layers of the national infrastructure, and they’re shaping how AI models perform and will perform, how services are optimised at the edge, how fraud is stopped before it happens and how digital identity remains secure in a world of growing complexity. In India, networks already support core DPI functions: identity verification, payments, emergency response and major public service platforms. As AI becomes embedded in these systems, the network doesn’t sit back in the background anymore; it becomes part of the decision-making fabric, providing context and priority for tokens or the critical elements of data of which digital public infrastructure information is the predecessor. Through this, the network becomes a contributor to governance, resilience and trust.

And that brings us to the second major theme of the day, digital sovereignty. In an AI-driven world, sovereignty is no longer just about where the data is stored, it’s about having strategic control over the infrastructure. The key to this is the ability to manage the infrastructure, the standards, and increasingly, the intelligence that underpins the national digital system. Countries want to know: how do we build AI-enabled public infrastructure that is safe, interoperable, and aligned with national priorities, while still remaining connected and interoperable with global markets and innovation? This is exactly where global standards matter. Fragmentation, whether technical, regulatory, or geopolitical, slows things down. Interoperability, open APIs and harmonized frameworks help countries scale confidently, while staying part of the global digital economy.

India is uniquely positioned to show how this balance can be achieved. Open, yet sovereign. Scalable, yet secure. National in ambition, but global in design. And our goal today is not just to talk about these themes, it is to translate them into direction. To identify practical next steps. To create space for collaboration. and to learn from India’s experience in ways that matter for economies that are at every stage of digital development. So I’m looking forward to the discussion and to the concrete actions we can shape together and I look forward to very big contributions from the panel today and also to hear more from the audience later. So thank you. Debashish, I hand over to you.

Debashish Chakraborty

Thank you, Julian. Thanks for the opening remarks. Am I audible? Looks like yes. So let’s begin. We have a fantastic panel here of experts. So let’s start with this discussion. What we have seen over the past few decades is that telecom networks have evolved. They have evolved a lot, from just enabling voice to powering mobile broadband to becoming the trusted digital infrastructure that we use today underpinning the modern economies, right? So today’s networks are no longer passive carriers of data. They are becoming intelligent platforms where AI is deployed either as an add-on or embedded already into the network, where digital identity is authenticated, where fraud is mitigated, where sovereignty over data and decision-making is increasingly exercised.

As India advances in digital public infrastructure and its AI ambitions, the key is how we ensure these systems remain trusted, interoperable, and globally compatible while avoiding fragmentation and duplication. And that is the conversation which we aim to explore today. Let me start with Rahul, who is the Chief Regulatory Officer for Airtel. Rahul, we often talk about digital public infrastructure as applications and platforms, but at the foundation sits the network which you drive. So from Airtel’s perspective, what makes the telecom networks uniquely positioned in the digital world as India’s trusted infrastructure layer, beyond just connectivity?

Rahul Vatts

Thank you, Debashish, and GSMA for this particular session. It’s a session of particular interest to me as a user in the digital ecosystem and of course to the entire digital fraternity, because if there’s one thing which India is doing great, it’s really the digital public infrastructure, to the extent that President Macron yesterday actually mentioned it. It’s the biggest export which India has done across the globe. So let’s talk about what’s really happening today. If you look at the data of January alone, India transacted 28 lakh crore rupees of money through its UPI infrastructure. It was spread across a billion people, and all this is happening on what? What is the foundation layer? The foundation layer is the connectivity layer.

And so, for us at Airtel, this is not just a plumbing job; it’s the very heart of the foundation we are laying for trust. How are people transacting this much money? Because they trust the ecosystem through which they really want to do this. And beneath this layer is really the connectivity which has powered the country. Look at the numbers of connectivity in a country like ours: we have got more than a million BTSs powering the entire country, we have got more than 500 lakh kilometers of fiber running in various shapes and forms across the country, and we have got, as an industry, more than a thousand edge and large hyperscale data centers. Now, can you imagine, each of the mobile switching centers carries a load of at least 30 to 50 million people, sometimes even larger at times. So this is the scale at which the infrastructure is becoming the layer with which we are operating. What is all this enabling? Let’s look at that.

What it is enabling is: every transaction you do, there is an OTP or SMS which is coming out, right? So this OTP and this SMS is what? It’s a layer of trust: people are trusting the message which they are trying to get on their system. Let’s look at the Aadhaar enabled payment system. More than 500 million rupees done on that alone. And how is that enabled? Through a connectivity which is happening in less than 2 milliseconds. So this again is an example of that same ecosystem. Let’s go further. What’s really happening and how are we doing? I don’t know how many of you actually visited the Airtel stall. We have got solutions where banks can use the telco indicators to make a smart choice about giving you loans, right?

We rank a person’s history as low risk or high risk, which enables the bank to take a smart decision in a matter of milliseconds. Remember, in India it’s not the large loans that matter. A lot of loans which are happening in the ecosystem are less than 2 lakh rupees, right? Loans of 2 lakh rupees or below are also a large share of the loans which happen. There is a financial risk fraud indicator which the department has created; banks can dip into that risk indicator and also get a score out of that, to say, okay, what is it that we are really trying to get out of this? All this is what the layer is.

Let’s look at what we as telcos are doing. We as telcos are giving you trust to say whether the call you are receiving is spam-free or not, right? We have launched at least three products over the last one year. We first launched our solution which warned you about a suspected spam call, right? Then we went ahead and started blocking fraudulent links, on the basis of the large database we created with global players like Google, OpenPhish and Mavenir. At the third stage, we just launched, around two weeks back, a very powerful product. You know, one of the reasons for spam is urgency: I’m calling you, please share your OTP urgently, right? And to remove that, we have now created a friction. If you are on a call and you receive an OTP, you get a flash message saying: please be careful, you are on a call and receiving an OTP, this may be spam. So it creates a friction for those 30 seconds, to say, do you really want to do this or not? All this is reinforcement of the trust we want to create in the ecosystem.

Let me go a little larger. We are operating in large countries across the globe, and one of the things we have been doing wonderfully well in Africa is to really take the digital public infrastructure blueprints from India and take them to Africa. So it’s all about identity, it’s about payments, it’s about how they are able to transact. And we have got a solution called DPI-in-a-box, right, which we are in conversation with a lot of African leaders about, to be able to transplant the India stack onto the African ecosystem. And how do we do that? We are giving a bundle of hardware and software, we are giving a very air-gapped cloud for you to do that, and we are creating the entire ecosystem for them so that they are able to implement a digital public infrastructure stack in their countries.

So really, Debashish, it’s about the trust which we try to create with the infrastructure layer, and how we get smart and make people’s lives and customers’ lives easier is what we are

Debashish Chakraborty

Thanks, Rahul. Those were very key messages which you gave, on how the network is being used for citizen-centric services and how the network has evolved over the last few years. Coming to you, Martin. Martin represents Vodafone Idea. You heard Rahul speaking about how the network is being used for various citizen-centric services, for fraud mitigation, for taking care of spam. A lot is being done by the mobile network operators, right? But my question here is: there’s also a growing discussion globally today about avoiding parallel digital infrastructures. India is building new DPI trust layers for authentication and fraud prevention. How do we ensure that the layers which the MNOs, the mobile network operators, are adding complement and do not duplicate operator-led capabilities like the Open Gateway APIs that GSMA has?

Speaker 1

So, in fact, I was part of one of the entity which set up and contributes to the largest DPI infrastructure today. I used to earlier be associated with the NPCI and then moved on here to Telco for the past five years now. So, the overall DPI infra, if I were to go by, I would want to answer this by bringing in four key words that I want to associate myself with in this. One, context and enrichment. And the second thing that I wanted to touch upon is serviceability and purpose. So when the entire DPI infrastructure evolved for the country, it evolved with two core purposes to be addressed with, right? So we were wanting to take the entire digital infrastructure to reach the last mile civilian.

We also had the objective of financial inclusion to be driven by the country. So the DPI framework was created to meet these two core objectives. The role of a TSP in this, by and large, was to ensure that the goal of digital India and financial inclusion landed up reaching the masses. That’s the role that TSP played. And with every net new tech evolution that has happened, there are various things that come in. Fraud evolved, so because banking happened in doorstep. fraud also started happening in doorstep. You don’t have to go and loot a bank today. You can loot thousands of individuals in the most easiest manner and fraud evolved that way. So in each of these contexts, while we realize the Digital India vision and the financial inclusion for the country as a whole, the DPI networks played a role, TSP’s played a role to ensure that these realizations come in handy.

Now, Rahul briefly touched upon a few of them. We are limited TSPs in the country, three, four of us comprehensively, who work in conjunction. Amongst us three, we land up working together. So I still remember those days when, from my previous entity, going to TRAI, asking them how do I find out fraudulent mobile numbers easily. Today, we look at it as FRI, which is exposed by the DOT themselves today to multiple other financial institutes, which can go and look it up and then take a decision. There is something called the digital intelligence platform, which again is collaborated TSP data amongst all three, converged and provided by the DOT themselves to the rest of the financial institutions to look into.

Now, all of these, I will bring back to my word around context, right? So these are information that multiple of us as TSPs are able to provide, collate and make available. Who can consume it? Any of these providers. Because fraud is not happening to me as a TSP. For me, if there is a call that is connected between person A and person B, it’s revenue to me. But for a bank, while the call is on, something else is going on; that is a context. And this context is something that you can provide back to enrich the data, and with this enriched data they can make the decisions they want to. I see an Aadhaar verification happening live from a location called A.

while at the same time there is a call happening showing that the presence of the person is in B, it does not matter to a telco because for me both are actually revenue. But for an authentication entity versus an entity which is approving a financial transaction, they may consider them as a fraud. So the context and enrichment of the context associated with the data, TSP today has the ability to provide a large amount of context -driven information to these individual players whereby they can consume them for their own utilization and make active decisions. So that’s the way that I would want to try and comment. One good part is at least all three of us, four of us are operated in converged platform.

We have done the experience with DLT that we set up during the earlier days of spam. Spam in those days was only the unwanted telemarketer messages that were coming; it has evolved. Spam has become scam. So now we are working towards how do we overcome scam, and beyond scam, whatever comes. Now there are digital arrests, humongous money being lost. So as TSPs, we work in conjunction, put them in order, collaborate with the likes of COAI and DOT to set up infrastructure as open APIs, and then allow these APIs to be interfaceable for institutions who would want to take decisions appropriately. Rahul touched upon digital lending, right? So, if you look at it, the country is serviced today by more than 1100 member banks across the country.

We might be knowing, sitting in metros, we might be remembering only a few banks, but to service such a large nation, we have 1100 member banks. Imagine these guys don’t have to always go back to CIBIL only to provide a lending. You may want to relate back by postpaid consumers, the quantum of money that they pay frequently, etc. It’s an inclusive decision. Those are open APIs we are able to set up. And India has been at the forefront to set it up, and we have operated it way too well already, is what I would want to say.

Debashish Chakraborty

By the way, your team is also working extensively with the GSMA team on the GSMA Open Gateway APIs. Many of them have even been certified now, I can tell you that. Thanks for that context in which you are talking about contextualization of data. That’s again a unique perspective that you’re talking about. Moving on to Deepak Maheshwari. Deepak represents CSEP, the Centre for Social and Economic Progress. Deepak, you have been attending and speaking in this conference for the last couple of days. Data sovereignty, I’m sure, is a term which you would have encountered several times. I want to ask you this, Deepak. How should India define data sovereignty in an AI-driven DPI era, beyond just data localization and control?

But how should India define data sovereignty without control over standards, decision-making systems, and long-term strategic autonomy?

Deepak Maheshwari

Thank you, GSMA, for having us here. When we are looking at this whole issue about digital sovereignty, data localization, etc., data localization itself could be looked at in different ways. For example, it could be about just the physical location of the data. That’s one. That’s a pretty obvious one. The second is also about data context, as Matan was just mentioning, in terms of what is the local context. So, for example, a lot of people think about data localization only in terms of local languages. But suppose you are seeing weather, and it shows you weather in Hindi here in Delhi, but of New York, probably it might not be that useful. So you also need local context.

And then beyond all these things, what is happening of late is, and again, this is not such a new concept, sovereignty as such. People have been talking about sovereignty; it’s been around for a fairly long time. Of course, the terminology of sovereignty has evolved; it’s not just about the data, the lexicon has evolved, but this whole notion has of late become much more important. For example, even in India, when we looked at the previous versions of the data protection law, if you look at the previous reports which never became the policy, which is the non-personal data framework, again, in all those things we had this notion that India’s data should remain in India.

Another thing: in February 2019, seven years back, we had something called the draft e-commerce policy. Now the tagline of that, however, was India’s data for India’s development. It was not about commerce. It was more about data. So from that perspective, when we look at today, and even when I was a member of MeitY’s committee in 2018, when the government first set up a committee on AI, again this whole thing came up: okay, what about data here? Now this is something that we need to look at in three different ways. One is: yes, there is some sort of data which India should have within its own physical as well as administrative control. So obviously things related to defence, national finances, etc., you would like to do that.

Second is as far as citizens’ data is concerned. Some of that data, yes: the UIDAI database, the voter database, etc., obviously that type of thing, yes. But there is another type of data for which citizens themselves may like to exercise their choice and their own agency, in terms of using that data not only in India but also outside India. For example, if I apply for a visa to another country, I will have to provide my data to that country. So there is no way that it can happen without that. And then the third thing is the business aspect, when we look at it. Now in terms of businesses, on one hand we are seeing in India, and we are very proud of it, that for the past three decades, we have emerged as a global outsourcing hub.

We are the global hub for data coming from all over the world, which is being processed here. But at the same time, if we try to create these walls around us, that okay, India’s data cannot go outside, but we expect that outside data should continue to come in, I think there’s a challenge in that. There’s a dilemma in that. There’s a dichotomy. Because these are walls. And walls are not membranes: in fluid dynamics, if we go back to our school physics, membranes are what allow one-way traffic, not two-way traffic, but walls are two-way isolations. So that’s another thing that we should keep in mind.

So when we’re talking about digital sovereignty within the context of AI, yes, obviously, there are things that we do want to have here and we should continue to do that. But there are also things where we do need more collaboration. So, for example, one of the terms that was used was about control. I would say it is not so much that we should control the standards as contribute to those standards. So, for example, whether it is GSMA or 3GPP, ISO, ITU, IEEE, et cetera, I mean, so many other standard organizations, whether they are plurilateral, whether they are multistakeholder in whichever form, they all have certain mechanisms for people and countries to participate in that decision making.

So rather than controlling that standard, the effort, the endeavor, should be about contributing to that standard making as a participant, as a contributor, and then evolving it. Obviously, when you are contributing and you are collaborating, you won’t have everything your own way. There will always inevitably be some give and take, because sovereignty by itself in a globalized world has a challenge: the moment we talk about any international organization, whether it is the UN, whether it is the WTO, whether it is the ITU, whether it is an organization like GSMA, if we want to work there, we’ll have to give up something to get something. The important thing is how we create an institutional mechanism such that we are in a position where, whatever we are giving, we believe that we are getting more than that.

So there should be some sort of incentives around that. And the last thing that I want to mention is that, yes, often we have been talking about India’s digital public infrastructure itself as a massive digitalization which is happening, but actually it is not so new. It’s more than one and a half centuries old. Because the original telecom network that came was in the telegraph era, and that was also in dots and dashes. So it was a binary world even at that time. And people may or may not believe it, but India got its first submarine cable in the same year that the US got it. And that was in 1858, just four years after the first submarine cable came up for the first time between the UK and France.

India got its first telegraph law in 1854; the first Indian Telegraph Act came in 1854. I have written a lot about this in a report; it is available online on the CSEP website if people are interested, using a 3C framework: carriage, content and conduct. But what is more important in this world of AI is not just the carriage, which is of course fundamental, as I mentioned. Carriage is fundamental because without that you just won’t be able to do anything. Content is what’s going through it. But more importantly, in terms of

Debashish Chakraborty

Beautiful insights. Thanks for taking us back to the concept of walls. I’d like to come to Mansi now. Mansi, sitting here, is representing the World Bank. Mansi, from the World Bank’s experience, we are talking about standards and we are talking about the DPI era. What are the risks you see when public digital infrastructure and private digital capabilities, Matan spoke about it briefly, when these two, the public digital infrastructure and private digital capabilities, are built in silos? And why are global standards essential in accelerating inclusive digital outcomes?

Mansi Kedia

and Rahul spoke about a lot. So systems coming together help build trust, and therefore having independent systems means there are more points of vulnerability in the system. So systems come together to build trust. Systems have to come together for efficiency. I think that’s the biggest economic argument for a lot of the things that you were saying about why banks are coming together, why data is coming together. So efficiency is the other thing. And the third, which was mentioned but again not articulated, was innovation. So mobile data is now becoming a source of data for lending. I mean, why are we using that for understanding credit risk and fraud risk and not something else?

So there's innovation happening on something that was never understood to be for that purpose. So I think the risk of building systems in silos, whether it is the public sector or the private sector, is essentially missing out on efficiency capabilities, innovation capabilities, and building trusted ecosystems, which is actually nothing but the foundation of digital public infrastructure. You used the word standards. I think the World Bank works more towards the idea of blueprints.

We have been doing a lot of work on developing blueprints, which are slightly more flexible and adaptable but bring together best practices from different countries, and on seeing how they can be made adaptable to different contexts. This is something Deepak sir was saying in his initial remarks: you want systems that bring you the operational ideas and principles but are not necessarily prescriptive about how to do things. When you have a standard, you know it's prescriptive, and that's how the networks are running; for that, you need a standard. But when you're building systems, the World Bank is approaching it more from a blueprint point of view.

So last year, the Bank came up with a Digital Public Infrastructure and Development report, where it articulated what it meant by digital public infrastructure: what are its principles, what are the objectives, what is DPI and what is not DPI. And I think that's the way we are going to go ahead, even with AI: AI commons, building common infrastructure, to determine pathways for the future which countries can adapt in their own ways. I'm just trying to distinguish between standards and blueprints here, because standards then get into ideas of commercialization; there has to be a process around them, and there's a whole private-sector play.

Here there’s a private sector play and a public sector play, but the idea is to work more on the approach than on a particular way of running something.

Debashish Chakraborty

Rahul, let me bring the operator's perspective back to data sovereignty. As AI moves deeper into network operations, not just at the surface level, what does data sovereignty practically mean for an operator, in terms of data storage and control, edge processing, cloud reliance, and control of the AI models?

Rahul Vatts

Yeah, thank you. I think one of the biggest misconceptions we all have today is about what exactly sovereignty is. A lot of people assume that if a hyperscaler cloud is housed in India, for example, it becomes sovereign infrastructure for that country. Nothing could be further from the truth. Why do I say that? If I have to define what is really sovereign, I will take at least three or four slices into it. The first slice for me is: is the data residing in the country or not? The answer to that may be yes; it may be residing in the country, and that's not a big deal, because hyperscaler clouds do reside in the country. The second indicator is digital sovereignty in that data, and digital sovereignty for me means: is the control plane of that cloud within India or not? How are you really controlling that data and the cloud? The answer is that not a single hyperscaler has its control plane in this country. That's a fact. The third slice is operational sovereignty. Say you want to upgrade the network, put a patch on the network, push software into the network: where are you doing it from? The fact is you are not doing it locally; most likely you are doing it from outside. The fourth indicator, and a very important one, is jurisdictional sovereignty. Today, under the US CLOUD Act, for example, is it not true that if the US government so wants, it can demand data? Why should any other territorial power have control over my data? So while the answer on data sovereignty may be that the data resides locally, the fact is the control plane will not be in this country, even the patches will not come from within this country, and we will be subject to jurisdictional controls.

So how are telcos becoming aware of this? Only last week I read about DT, Deutsche Telekom, launching a sovereign cloud offering in Europe. Why did they launch it? And by the way, six months ago Airtel launched its own sovereign cloud offering. The answer for us was very simple: we were already managing the data of nearly 500 million people in our network, and we asked ourselves, where is that data housed?

We said: within our own networks. So we really have the capability to manage that complex data set; then why can't we offer the same thing to our customers? That's why telcos are having a renewed interest in getting into the sovereign space. Why is it important? And let me be very selective about this. Do we need hyperscaler clouds in the country? I'm saying yes, we do. If there are efficiencies of scale, if there are better products to be used, why not? But tell me, why should the KYC data of my customer be sitting outside with somebody? Why should the health records of citizens of this country be sitting outside this country?

Why should any critical data set relating to defense or security agencies be sitting outside this country? I think we have to get selective. We use the efficiencies of scale from whichever party is best placed to give that solution, but we should get selective about which data should reside, and remain under control, within this jurisdiction. That is an important part, and a discussion we need to have. If I go to the market today, there are a lot of players selling sovereign cloud, but really, there is no sovereignty involved. And AI rests on data, right? We cannot take the right decisions on data if we cannot really control it in the proper sense.

Hence, we require dynamism in our regulations and policies, but we also require sovereignty to be practiced in a real sense for us to be able to do that. Airtel Cloud, which we built, does around 140 crore transactions per second; that's the bandwidth we have built. It was very interesting the day the Prime Minister came to the Airtel stall and asked, Rahul, what is the capacity of the thing you have created? And I told him: you tell me, sir, what is the capacity you want us to create. It's really up to you. You have to guide us and say, we want these multiple use cases lined up for the country, and we are most happy to do that.

So I think we are in a very good place. We have very robust infrastructure. How we now navigate this world of AI and provide real opportunity to the players within our ecosystem is what we are really looking forward to.

Debashish Chakraborty

You reminded me of a conversation we were having just a couple of days ago, when someone talking about data sovereignty said it's so utopian to talk about data sovereignty, because if we slice and dice, we realize where the sovereignty actually is. And you touched on that; thanks for that point. Martin, I'll come back to you. This was actually meant for Ambika, but you'll have to deal with it. From Vodafone Idea's regulatory lens, what are the biggest policy frictions emerging as networks become AI-driven platforms? If you see any regulatory challenges, how can these be met without data sovereignty slowing innovation?

Speaker 1

So I'll try and answer from two perspectives. We heard our Honourable PM mention AI being responsible and reasonable; the word he used was reasonable, in multiple places. And there are multiple other concepts that come with reasonability, one being explainability, another being accountability, and so on and so forth. Today, we as TSPs are governed under the ambit of the unified license, which is administered by DoT. In some of the examples that Rahul touched on, that I touched on, and that the World Bank team related back to, we can see that our portfolio has expanded beyond the conventional TSP governed under the unified license. Looking at the expanded offerings we are taking to market, monetization or not, thank God data privacy is at least enacted now. I am also the DPO for the firm, by virtue of which, when we touch upon this area called data localization, or what we would call data sovereignty, my personal view is that it is largely misinterpreted. The DPDP at least clarifies that data collected has to be defined with a purpose. Now, although we are a TSP, we fall under the ambit of a significant data fiduciary, so most likely we will also be governed by the data privacy laws of the country. So there are regulations which are governing us reasonably well.

So if I frame this in three or four broader perspectives of accountability and explainability: when we leverage AI, we would want the AI to be able to explain itself. Now, is that covered under the ambit of the UL, or under data privacy? Maybe not at all, right? So, and Mansi actually narrated it very well, we would want a referenceable standard, something all of us can easily relate to and apply. It could be a blueprint, it could be a playbook. Does such a framework exist in an easily adaptable form? The larger entities like us will possibly be the first to invent the way through and make it a playbook.

That can then be handed to somebody who can make it a blueprint, make it a standard, and apply it to the rest of the industry as a whole. So that's the first and foremost. The role of a TSP is also changing today. From a conventional telecom provider, we are now, as in the example I highlighted earlier, an intermediary providing additional data insights. Now there is a law for digital intermediaries. The purpose for which a citizen has shared data with me is one thing; but if I put it to use beyond that purpose, from a monetization standpoint, does the ambit of the digital intermediary also apply to me?

I wouldn't want to comment on that; my regulator should look at whether that also becomes applicable to me. But those are the evolving spaces we are looking at. And the last, very famous topic floating around among telcos is spam and scam protection. Here, again from the Honourable PM's perspective of reasonable AI: most of us associate reasonable AI with explainability. Now, imagine we have deployed a scam solution that auto-blocks things, and we want that AI to explain why it blocked you. If it explains why something was blocked, what am I looking at? I'm actually advancing the ability of the scamster to know why I am blocking him, so that he refines himself to not get blocked.

So that comes in the context of security. Do I make a framework, a guideline, to say that here I would not want explainability, because security becomes a far more important element by comparison? So frameworks have to evolve. We need standards, but standards are not universally applicable in every possible manner; they are taken and applied by individual enterprises, in the context in which they have to be put to use, and then made to work. So I look forward to regulators being innovative in allowing us to make the appropriate choices, while regulations continue to evolve appropriately.

Debashish Chakraborty

Thanks, Martin. I'll take this conversation slightly global, Deepak. How do you think India can leverage its DPI and telecom-led digital architecture to provide a credible, scalable model for the Global South, particularly for countries seeking digital sovereignty without technological isolation?

Deepak Maheshwari

Okay. So when somebody offers a technical solution to someone else, it typically comes with certain intellectual property rights (IPRs). For example, if somebody is using a particular technology, there could be patents, there could be copyright, et cetera. Now, when India offers its DPI-led model, nothing of that sort is attached, so countries are able to adopt it. It's a framework, it's a philosophy, and there's an open protocol. They can adopt it and change it the way they wish; it is really open in that sense. That's one very important difference compared to, say, some other country or company offering a particular technology that also involves a certain type of monetization: this is what you continue to pay us if you scale it to 1 million population, this is what you pay if you do it for 10 million or 100 million, and so on.

India doesn't ask for that type of thing. So that's one very strong distinction. The second thing is enablement. Enablement is happening not just as technical assistance; it is also happening through multiple other organizations. For example, we have the Research and Information System think tank under the Ministry of External Affairs, and another is the Indian Council of World Affairs. They are doing a lot of work on developing intellectual frameworks and capacity, as a matter of diplomacy itself. That's another dimension which is not often seen, but it is again a matter of soft diplomacy. For example, three years back, in 2023, at ICWA I had proposed a framework called EOSS, which was basically about taking India's DPI global (you can of course create a different acronym, et cetera), with the focus on interoperability, security and so on. The other aspect is standards. Mansi did distinguish between standards and systems, or blueprints as she called them, but one very important document I would refer to is again from the World Bank. She mentioned the DPI report, but an even more recent document, which came out just a couple of months back, is the World Bank's development report on standards. We all look at traffic lights: the three lights, red, amber, green. And the current traffic light standard came up only in 1968.

It's not very old, okay? But it did happen, and it has become globally acceptable. Within the design, yes, you can put it vertical, you can put it horizontal, and there are other variations; that is what a standard allows. So in this way India is doing a lot of enablement across the Global South. In fact, just last Friday I published a policy brief called Global South's AI Pivot with CG of Canada. It talks about three things: equity, ethics, and ecology. So India is not only talking about AI being reasonable, responsible, inclusive, and accessible, all of that.

It is also looking at things from an efficiency perspective, and efficiency is not just financial efficiency. Here we are talking about resource efficiency: how do we manage these things with a minimum footprint of material, energy, water, things like that? And this goes back to something the Prime Minister keeps talking about: LiFE, which is Lifestyle for Environment. Now this whole philosophy of…

Debashish Chakraborty

Thanks, Deepak. I'm conscious of time. Mansi, the last one to you. India's approach to DPI, built on open, interoperable and scalable digital rails, is increasingly influencing global conversations. How do you see India's DPI model shaping digital development strategies across emerging economies?

Mansi Kedia

Thank you. I'll keep it really short. At the Bank we started working on ID for development, G2P and fast payments even before this whole big DPI push happened in India, and particularly before it became socialized through the G20 process, when many other actors came in, across foundations, think tanks, technology companies, and started to socialize the idea of DPI and the DPI approach to digital transformation. India, with the vast amount of experience, scale and heterogeneity it has, surely offers excellent evidence on what works and what doesn't. And it's really great that a lot of the people who were part of the founding and building of the DPI have now gone ahead and tried to take this to other countries in a way that is adaptable to them.

And there are so many organizations, without taking names lest I miss out on important ones (I don't want to take that chance), several organizations doing a fabulous job of that. And the government itself, whether actively or indirectly, is also trying to talk to the world about how the DPI approach works. More actively, with UPI and NPCI, as Martin was mentioning, there's active collaboration on making these fast-payment systems work, in collaboration with BIS, to see whether we can actually realize the Finternet, the idea of the Finternet that came up with BIS. So I don't see this dying down. Like I said, we have a lot of evidence on the foundations as well as, now, the sectoral applications.

And particularly because this is a GSMA session about mobile, I don't want to forget mentioning this really important part: how the Department of Telecom has begun to think about utilizing mobile data. While the telcos are thinking of it from a credit and fraud-management perspective, the department is also thinking of it very actively for planning and mobility, which I think is really fabulous. It's not as if other countries haven't done it, but the DPI approach being taken towards it, to scale access to data, make models available, provide compute, and build that whole stack, is not something that has happened elsewhere. And obviously it's going to evolve. I don't think it's perfect.

Nor should we feel the pressure of making it perfect in one go, but these learning experiences will surely inform how other countries can do it. Some of the things we are doing, we are trying to do at population scale. Yes, exactly.

Debashish Chakraborty

So I think we can take just one question from the audience; I can see three hands already. How much time do we have? Do we have time for one question? Gentleman, please state your name and to whom you want to address the question.

Audience

I am Vijay Agarwal. I am interested in AI; by profession I am a manufacturer of jewelry. What I wanted to propose is: why don't we have a product, like a ring, where the private data, the KYC data, resides physically only on that item, which is on the body? If it leaves the body, it leaves only in encrypted form, and it can only be collated with another key, for the purpose for which consent has been given, with a blockchain record of it.

Debashish Chakraborty

You mean in the form of jewelry?

Audience

Yeah, so we have an Aadhaar ring for every Indian. It would store the KYC record and the medical record, which could be accessed in case of emergency, and all these control layers that you are talking about could be in the form of cryptography. Also, on the concept of data embassies as part of the discussion on data sovereignty: is there a good case for India to offer data embassies? Obviously it would be on a multilateral basis, but any thoughts on that?

Deepak Maheshwari

I would say yes, if it is on a reciprocal basis.

Rahul Vatts

Let me try to address the first part of what you were saying. I think today the problem is not your data being insecure with Aadhaar; I think it's very secure. There are a lot of things Aadhaar does; there is also the masking they have started. So the leakage of data, or private data, is really not the issue here. Data going out has various other forms and factors, particularly the way the government takes data from users. It is the government which has to really start looking at this: for example, telcos are required to share subscriber data every month in physical copies. Why would you do that? So it is not really the digital aspect which is the problem; it is how you are managing the data. And I think work on quantum has already started, sir; Aadhaar itself is working on that. On data embassies, Vikram, I completely endorse Deepak: it cannot be just me, right? Look around, have it reciprocated, and then let's play it. But you cannot expect the world's largest data creator and consumer to be the one to start offering this first. It is a two-way street. For too long, I think, as a country we have been in a sphere where we are supposed to give and not supposed to take anything. That has to change.

Debashish Chakraborty

The organizer is already standing on my head, but I just wanted to say one thing on the point about the government taking data. Of course IRCTC doesn't do it now, but till about 15 years back, if you were creating an IRCTC ID for the first time, it used to ask even your marital status, with apparently no benefits or disadvantages attached, and it was a compulsory field, by the way. I would like to thank each of the speakers here for making this a very engaging conversation. Thank you, Mansi, Rahul, Deepak, Matan, for your time and for this session. Thank you very much, audience. Thank you.

Related Resources: knowledge base sources related to the discussion topics (38)
Factual Notes: claims verified against the Diplo knowledge base (5)
Confirmed (high)

“The session opened with Debashish Chakraborty introducing the theme of “convergence of AI, telecom and data sovereignty all weaved around the digital public infrastructure” and handing over to Julian Gorman, head of APAC GSMA, for the keynote address.”

The knowledge base explicitly mentions the convergence of AI, telecom and data sovereignty around digital public infrastructure and references Julian Gorman as head of APAC GSMA being invited to give the keynote, confirming the report’s description.

Additional Context (medium)

“Julian Gorman positioned GSMA as the global body that unites the mobile economy to “unlock the power of connectivity so industry and society thrive”.”

GSMA’s broader mandate is described in the knowledge base as promoting digital, mobile‑based solutions worldwide and advancing innovative solutions that drive economic growth, providing context for its role as a unifying global body.

Additional Context (medium)

“Digital sovereignty is defined as strategic control over infrastructure, standards and the underlying intelligence.”

The knowledge base defines digital sovereignty as a nation’s ability to understand, develop and regulate digital technologies to maintain control and self‑determination, adding nuance to the report’s definition.

Additional Context (low)

“Rahul Vatts highlighted the scale of India’s Digital Public Infrastructure (DPI), noting UPI’s massive transaction volume.”

The knowledge base notes that UPI is the world’s largest digital payment system, underscoring its massive scale and supporting the claim of a large‑volume DPI, though it does not provide the exact monetary figure cited.

Confirmed (low)

“The discussion was part of a panel titled “AI in Digital Public Infrastructure (DPI) – India AI Impact Summit”.”

A panel discussion on AI in DPI at the India AI Impact Summit is recorded in the knowledge base, confirming the existence of such a session.

External Sources (116)
S1
Building Indias Digital and Industrial Future with AI — – Deepak Maheshwari- Rahul Vatts – Rahul Vatts- Deepak Maheshwari
S2
Keynote-Martin Schroeter — -Speaker 1: Role/Title: Not specified, Area of expertise: Not specified (appears to be an event moderator or host introd…
S3
Responsible AI for Children Safe Playful and Empowering Learning — -Speaker 1: Role/title not specified – appears to be a student or child participant in educational videos/demonstrations…
S4
Building Trusted AI at Scale Cities Startups &amp; Digital Sovereignty – Keynote Vijay Shekar Sharma Paytm — -Speaker 1: Role/Title: Not mentioned, Area of expertise: Not mentioned (appears to be an event host or moderator introd…
S5
AI Automation in Telecom_ Ensuring Accountability and Public Trust India AI Impact Summit 2026 — -Mr. Julian Gorman: Representative from GSMA, expert in telecom industry collaboration and anti-scam initiatives across …
S6
Building Indias Digital and Industrial Future with AI — -Julian Gorman- Head of APAC GSMA
S7
Building Indias Digital and Industrial Future with AI — -Debashish Chakraborty- Moderator, represents GSMA
S8
https://dig.watch/event/india-ai-impact-summit-2026/building-indias-digital-and-industrial-future-with-ai — As India advances in digital public infrastructure and its AI ambitions, the key is how we ensure these systems remain t…
S9
WSIS 2018 – High-level policy statements: concluding session — Mr Deepak Maheshwari, Symantec, facilitated the Moderated High-Level Policy Session 3 – Enabling environment, which focu…
S10
Building Indias Digital and Industrial Future with AI — By the way, your team is also working extensively with the GSM team on the GSM OpenGate APIs. Many of them have been eve…
S11
Building Indias Digital and Industrial Future with AI — -Mansi Kedia- Representative from World Bank
S12
WS #280 the DNS Trust Horizon Safeguarding Digital Identity — – **Audience** – Individual from Senegal named Yuv (role/title not specified)
S13
Building the Workforce_ AI for Viksit Bharat 2047 — -Audience- Role/Title: Professor Charu from Indian Institute of Public Administration (one identified audience member), …
S14
Nri Collaborative Session Navigating Global Cyber Threats Via Local Practices — – **Audience** – Dr. Nazar (specific role/title not clearly mentioned)
S15
Collaborative AI Network – Strengthening Skills Research and Innovation — “We’re talking of AI being a possible DPI, a digital public infrastructure.”[1]. “I think those are aspects which a DPI …
S16
High-level AI Standards panel — Effective coordination requires mechanisms for standards development organizations to coordinate globally through strate…
S17
https://dig.watch/event/india-ai-impact-summit-2026/the-foundation-of-ai-democratizing-compute-data-infrastructure — Thank you. So I think two characteristics of digital public infrastructure, which are key, are to ensure that not only t…
S18
https://dig.watch/event/india-ai-impact-summit-2026/shaping-ais-story-trust-responsibility-real-world-outcomes — You can see the actual data. You can see the actual data. You can see the actual data. You can see the actual data. You …
S19
OPENING CEREMONY | IGF 2023 — It is crucial to maintain an open, free, global, interoperable, secure, and trustworthy Internet. This requires effectiv…
S20
https://dig.watch/event/india-ai-impact-summit-2026/building-population-scale-digital-public-infrastructure-for-ai — And why would we need a hub like this to do that? Well, one of the big barriers that we are currently seeing is the frag…
S21
Scaling Trusted AI_ How France and India Are Building Industrial &amp; Innovation Bridges — “The scale is directly proportional to the trust we built in the system, for sure.”[41]. “with each other and today UPI …
S22
https://dig.watch/event/india-ai-impact-summit-2026/panel-discussion-data-sovereignty-india-ai-impact-summit — So Sunil, we’ll just come back to that. We’ll just get everybody else in and then we’ll speak about your examples. I’m j…
S23
Contents — – Increase legal certainty in the use of data. Data-based value added is dependent on the existence of legal certainty. …
S24
Connecting Digital Economies: Policy Recommendations for Cross-Border Payments — Government guidelines on standards can be another effective way to endorse the adoption of internationally accepted paym…
S25
Informal Stakeholder Consultation Session — By offering open APIs and modular design, these platforms have lowered entry barriers for startups and MSMEs, enabling t…
S26
Cloud computing and data localisation: Lessons on jurisdiction — A more nuanced approach to the movement of data could be undertaken, similar to how trade has evolved from goods to serv…
S27
Criminal justice access to electronic evidence in the cloud: — – -It is often not obvious for criminal justice authorities in which jurisdiction the data is stored and/or which…
S28
Webinar digest: Issues and concerns when moving to the cloud — What we often take for granted are legal issues, and issues related to security. For example, we rarely bother to find o…
S29
Practical Guide to Cloud Computing Version 3.0 — – SaaS offerings are accessible over the public Internet which makes it very easy to roll them out to a large audience w…
S30
Data embassies: Protecting nations in the cloud — However, there are also technological challenges associated with protecting the integrity and confidentiality of critica…
S31
Digital Public Infrastructure, Policy Harmonisation, and Digital Cooperation – AI, Data Governance,and Innovation for Development — 3. Contextualising Policies and Technologies: 5. Promoting research-driven policy formulation Adamma Isamade: thank yo…
S32
AI as critical infrastructure for continuity in public services — Data sovereignty requires control over jurisdiction, keys, and infrastructure beyond just local data storage
S33
Host Country Open Stage — Infrastructure | Legal and regulatory | Human rights Silva contends that digital sovereignty means ensuring platforms a…
S34
Digital Public Infrastructure, Policy Harmonization, and Digital Cooperation — Marie Ndé Sene Ahouantchede explains that ECOWAS views public digital infrastructure as built on three pillars: payment …
S35
Regulating Open Data_ Principles Challenges and Opportunities — Digital ecosystems simply do not function in silos. However, enabling data to move across borders should not mean that c…
S36
The Foundation of AI Democratizing Compute Data Infrastructure — Thank you. So I think two characteristics of digital public infrastructure, which are key, are to ensure that not only t…
S37
Secure Finance Risk-Based AI Policy for the Banking Sector — Consumer centric safeguards obviously by way of transparent disclosure clear appeal processes and human intervention mec…
S38
How the Global South Is Accelerating AI Adoption_ Finance Sector Insights — We joke that we shouldn’t worry about AI until we figure out AV. So I guess this is a perfect example of that. Thanks fo…
S39
A digital public infrastructure strategy for sustainable development – Exploring effective possibilities for regional cooperation (University of Western Australia) — DPI offers various benefits for meeting the SDGs through effective data collection and utilisation. According to a polic…
S40
WS #83 The Relevance of DPGs for Advancing Regional DPI Approaches — India: Flexible Modular Architecture. Brazil’s PIX payment system exemplifies successful regional innovation, now ex…
S41
Building Scalable AI Through Global South Partnerships — I was just going to do one more thing, which is thank you, Shalini, and thank you to the panel for allowing us this smal…
S42
What is it about AI that we need to regulate? — Addressing the Tension Between Digital Sovereignty and Global Internet InteroperabilityThe tension between digital sover…
S43
WS #257 Emerging Norms for Digital Public Infrastructure — Jyoti Panday: Good morning, everyone. As Professor Muller introduced me, I’m Jyoti Pandey, I work with him at the Inte…
S44
The State of Digital Fragmentation (Digital Policy Alert) — In terms of data governance, the analysis emphasises the need for dialogue and finding common ground for global data gov…
S45
Building India’s Digital and Industrial Future with AI — “India, surely for the vast amount of experience and scale and heterogeneity that it has, offers excellent evidence on w…
S46
AI Meets Agriculture Building Food Security and Climate Resilience — And this is not proprietary. It is being designed as a replicable public infrastructure model for India and the entire g…
S47
A digital public infrastructure strategy for sustainable development – Exploring effective possibilities for regional cooperation (University of Western Australia) — DPI offers various benefits for meeting the SDGs through effective data collection and utilisation. According to a polic…
S48
WS #257 Data for Impact Equitable Sustainable DPI Data Governance — High level of consensus on fundamental principles and challenges, with speakers from different sectors (public, private,…
S49
WS #179 Navigating Online Safety for Children and Youth — 1. Global Standards vs Local Adaptation: Keith Andere highlighted the need to adapt global standards to local contexts a…
S50
Driving India’s AI Future Growth Innovation and Impact — The innovate side really comes down to. Areas like skilling, which I know when Minister Chaudhry joins us, we will get i…
S51
INTERNET SECURITY THREAT REPORT 2015 — The potential exposure of personal data from health-monitoring devices could have serious consequences for individuals, …
S52
Data Policy in the Fourth Industrial Revolution: Insights on personal data — Private or public facts – the information may have been made available, shared or publicly posted by the individual …
S53
Data embassies: Protecting nations in the cloud — In the classical sense, embassies have always served as shelters for the people they represent, thereby helping to reduc…
S54
Open Forum #7 Deepen Cooperation on Governance, Bridge the Digital Divide — 1. Balancing data sovereignty concerns with the benefits of global cloud infrastructure.
S55
Panel Discussion Data Sovereignty India AI Impact Summit — Low to moderate disagreement level with high strategic alignment. The disagreements are primarily tactical and reflect d…
S56
WS #180 Protecting Internet data flows in trade policy initiatives — This comment deepened the analysis by highlighting the need for more nuanced understanding of these concepts. It led to …
S57
Workshop 2: The Interplay Between Digital Sovereignty and Development — Sofie Schönborn: the context for our interactive discussion. Thank you. Thank you so very much. It’s a pleasure to be he…
S58
Main Session on Future of Digital Governance | IGF 2023 — Lastly, the importance of building upon existing principles in policy-making and creating new solutions was highlighted….
S59
BPF: CYBERSECURITY — Collaboration, resource sharing, and avoiding duplication of efforts were emphasized as crucial in the fight against cyb…
S60
Charting an inclusive path for digitalisation and a green transition for all — The analysis also emphasises the importance of open data and data sharing policies to eliminate duplication of efforts. …
S61
WSIS Action Line C7: E-Agriculture — Aminata argues that while e-agricultural solutions are increasingly based on AI and data, the lack of accessible data pr…
S62
Building India’s Digital and Industrial Future with AI — Thank you, Julian. Thanks for the opening remarks. Am I audible? Looks like yes. So let’s begin. We have a fantastic pan…
S63
Collaborative AI Network – Strengthening Skills Research and Innovation — “We’re talking of AI being a possible DPI, a digital public infrastructure.”[1]. “I think those are aspects which a DPI …
S64
DPI+H – health for all through digital public infrastructure — DPI was portrayed not just as infrastructure but as part of an inclusive ecosystem involving legal, financial, and socie…
S65
AI as critical infrastructure for continuity in public services — The discussion revealed that data sovereignty encompasses more than simple data localization. As Pramod noted, true sove…
S66
Host Country Open Stage — This paradoxical statement challenges the typical understanding of digital sovereignty as protectionist or isolationist….
S67
Digital Public Infrastructure, Policy Harmonization, and Digital Cooperation — Marie Ndé Sene Ahouantchede explains that ECOWAS views public digital infrastructure as built on three pillars: payment …
S68
Co-facilitators of Global Digital Compact process issue assessment from deep dives and consultations — In a letter dated 1 September 2023 and transmitted to all permanent representatives and permanent observers to the UN in N…
S69
IGF LAC Space — Maintaining an open, secure, and interoperable internet while avoiding fragmentation was another key point of discussion…
S70
WS #257 Emerging Norms for Digital Public Infrastructure — Interoperability is key for enabling cross-border use of DPI and preventing fragmentation.
S71
Secure Finance Risk-Based AI Policy for the Banking Sector — Consumer centric safeguards obviously by way of transparent disclosure clear appeal processes and human intervention mec…
S72
How the Global South Is Accelerating AI Adoption: Finance Sector Insights — We joke that we shouldn’t worry about AI until we figure out AV. So I guess this is a perfect example of that. Thanks fo…
S73
WS #83 The Relevance of DPGs for Advancing Regional DPI Approaches — India: Flexible Modular Architecture. Brazil’s PIX payment system exemplifies successful regional innovation, now ex…
S74
Building Scalable AI Through Global South Partnerships — I was just going to do one more thing, which is thank you, Shalini, and thank you to the panel for allowing us this smal…
S75
AI for Agriculture Scaling Intelligence for Food and Climate Resilience — It is being designed as a replicable public infrastructure model for India and the entire global south. In partnership w…
S76
A digital public infrastructure strategy for sustainable development – Exploring effective possibilities for regional cooperation (University of Western Australia) — DPI offers various benefits for meeting the SDGs through effective data collection and utilisation. According to a polic…
S77
Responsible AI for Shared Prosperity — The tone was consistently optimistic and collaborative throughout, with speakers expressing urgency about the civilizati…
S78
How Multilingual AI Bridges the Gap to Inclusive Access — The tone was consistently collaborative, optimistic, and mission-driven throughout the conversation. Speakers demonstrat…
S79
Opening of the session — The tone began very positively and constructively, with the Chair commending delegations for focused, specific intervent…
S80
Afternoon session — The discussion began with a collaborative and appreciative tone as various stakeholders shared their visions and commitm…
S81
Launch / Award Event #52 Intelligent Society Development & Governance Research — The discussion maintained a consistently optimistic and collaborative tone throughout. Speakers expressed enthusiasm abo…
S82
Accelerating Structural Transformation and Industrialization in Developing Countries: Navigating the Future with Advanced ICTs and Industry 4.0 — Very low level of disagreement. The speakers were largely aligned on goals and strategies, with differences mainly in em…
S83
Building the Next Wave of AI: Responsible Frameworks & Standards — The discussion maintained a consistently collaborative and solution-oriented tone throughout. It began with an authorita…
S84
AI for Safer Workplaces & Smarter Industries Transforming Risk into Real-Time Intelligence — The discussion maintained an optimistic and collaborative tone throughout, with speakers consistently emphasizing human …
S85
AI and Data Driving India’s Energy Transformation for Climate Solutions — The tone was collaborative and solution-oriented throughout, with speakers building on each other’s insights rather than…
S86
Informal Stakeholder Consultation Session — -Internet Governance and Multistakeholder Approach: Strong support for strengthening the Internet Governance Forum (IGF)…
S87
New Technologies and the Impact on Human Rights — The discussion maintained a collaborative and constructive tone throughout, despite addressing complex and sometimes con…
S88
Setting the Rules: Global AI Standards for Growth and Governance — The discussion maintained a consistently collaborative and constructive tone throughout. Panelists demonstrated remarkab…
S89
Towards a Resilient Information Ecosystem: Balancing Platform Governance and Technology — The discussion maintained a professional, collaborative tone throughout, characterized by constructive problem-solving r…
S90
Multistakeholder Partnerships for Thriving AI Ecosystems — The tone was constructive and solution-oriented throughout, with speakers building on each other’s points rather than de…
S91
The New Delhi G20 Summit: Reflections from India — At the opening session of the Summit, the G20 agreed to admit by consensus the African Union (AU) as a permanent and equ…
S92
Keynote-Rishi Sunak — The tone was consistently optimistic and inspirational throughout. Sunak maintained an enthusiastic, forward-looking per…
S93
Any other business /Adoption of the report/ Closure of the session — In conclusion, the delegate’s remarks highlighted the enduring spirit of solidarity and collaboration, while also convey…
S94
Building the Workforce: AI for Viksit Bharat 2047 — The tone was formal and optimistic throughout, maintaining a diplomatic and collaborative atmosphere. Speakers consisten…
S95
WS #302 Upgrading Digital Governance at the Local Level — The discussion maintained a consistently professional and collaborative tone throughout. It began with formal introducti…
S96
The Future of Digital Agriculture: Process for Progress — The necessity of forging strategic alliances to transverse sectoral barriers and to embrace digitalisation in agricultur…
S97
Ad Hoc Consultation: Tuesday 6th February, Afternoon session — Their call for consensus may be a strategic diplomatic move to present a united stance internationally, thereby circumve…
S98
[Parliamentary Session Closing] Closing remarks — The tone of the discussion was formal yet collaborative and appreciative. There was a sense of accomplishment for the wo…
S99
Masterclass#1 — Gratitude was expressed towards both presenters and participants for engaging in the dialogue.
S100
Unlocking potential: Addressing inclusivity barriers in e-commerce trade to deliver sustainable impact in communities everywhere (United Kingdom) — GSMA, the Global Association of the Mobile Industry, is actively involved in promoting digital, mobile-based solutions i…
S101
In response to G20, USAID launches the Women in the Digital Economy Initiative — Following the G20’s commitment to halve the digital gender gap by 2030, USAID launched the Women in the Digital Economy …
S102
Mobile industry continues to close digital divide and accelerate global impact across all SDGs, GSMA report — GSMA published the fifth edition of its Mobile Industry Impact Report, which examines the increased impact the mobile indus…
S103
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Cristiano Amon — The equipment was different. The use case is different. We’re heading to the next big transformation of the telecom sect…
S104
Closure of the session — The purpose of this discussion was to gather input from delegations on the structure, modalities and key elements of the…
S105
De-briefing and Next steps — The session was possible because of cooperation
S106
WSIS Action Line C7: E-health – Fostering foundations for digital health transformation in the age of AI — The discussion maintained a professional, collaborative, and forward-looking tone throughout. It began with formal prese…
S107
Ad Hoc Consultation: Wednesday 31st January, Afternoon session — The purpose is to work towards a consensus
S108
Advancing digital identity in Africa while safeguarding sovereignty — A pivotal discussion on digital identity and sovereignty in developing countries unfolded at the Internet Governance Foru…
S109
Unlocking the EU digital future with eIDAS 2 and digital wallets — The EU, like the rest of the world, is experiencing a significant digital transformation driven by emerging technologies, …
S110
Day 0 Event #257 Enhancing Data Governance in the Public Sector — Belli defines digital sovereignty as a nation’s ability to understand, develop, and regulate digital technologies to mai…
S111
Digital identities: Issues and cases — One solution is through shifting the power in digital identification from the authorities to the person. This can be ach…
S112
Panel Discussion AI in Digital Public Infrastructure (DPI) India AI Impact Summit — And accessibility has to be also broadened in terms of multi-modality and also, where necessary, include a human in the…
S113
Digital Public Infrastructure: An innovative outcome of India’s G20 leadership — From latent concept to global consensus. Not more than a couple of years back, this highly jingled acronym of the present…
S114
Multistakeholder Dialogue on National Digital Health Transformation — Vikram Pagaria: Distinguished delegates colleagues and friends it’s an honor to be here today I would like to extend my …
S115
High Level Session 2: Digital Public Goods and Global Digital Cooperation — Economic | Infrastructure | Inclusive finance Nilekani highlights India’s Unified Payments Interface (UPI) as the world…
S116
Leaders TalkX: ICT Applications Unlocking the Full Potential of Digital – Part II — Anil Kumar Lahoti:Thank you, Dana. First of all, I thank ITU for inviting me to this plus 20, and I consider this as my …
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
J
Julian Gorman
2 arguments, 159 words per minute, 627 words, 235 seconds
Argument 1
Network as AI‑enabled public layer
EXPLANATION
Julian describes mobile networks as evolving from simple connectivity providers to intelligent, programmable layers that actively participate in AI model execution and service optimization. He emphasizes that networks now shape AI performance, edge processing, fraud prevention, and digital identity security.
EVIDENCE
He states that “Today’s mobile networks are becoming intelligent, programmable and trusted layers of the national infrastructure” and that they are “shaping how AI models perform and will perform and how services are optimised at the edge, how fraud is stopped before it happens and how digital identity remains secure” [13-15].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The opening keynote describes telecom networks as intelligent, programmable layers that support AI model execution and edge services, confirming the shift from passive carriers to AI-enabled infrastructure [S1]; a later remark calls the network an “intelligent fabric” reinforcing this view [S18].
MAJOR DISCUSSION POINT
Network as AI‑enabled public layer
AGREED WITH
Debashish Chakraborty, Rahul Vatts, Speaker 1
Argument 2
Strategic control through global standards
EXPLANATION
Julian argues that in an AI‑driven world, digital sovereignty requires strategic control over infrastructure, standards, and intelligence, not just data location. Global standards are essential to ensure safety, interoperability, and alignment with national priorities while staying connected to global markets.
EVIDENCE
He notes that “In an AI-driven world, sovereignty is no longer just about where the data is stored, it’s about having strategic control over the infrastructure” and that “the key to this is the ability to manage the infrastructure, the standards, and increasingly, the intelligence that underpins the national digital system” [19-21]; he adds that “Fragmentation, whether technical, regulatory, or geopolitical, slows down” and that “Interoperability, open APIs, harmonized frameworks, help countries scale confidently” [22-25].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Discussion on digital sovereignty stresses that control over infrastructure and standards is essential, not just data location [S8]; high-level AI standards coordination highlights the need for global, interoperable frameworks [S16]; multi-stakeholder internet governance underlines the role of open standards for safety and interoperability [S19].
MAJOR DISCUSSION POINT
Strategic control through global standards
AGREED WITH
Deepak Maheshwari, Mansi Kedia, Rahul Vatts
DISAGREED WITH
Mansi Kedia
D
Debashish Chakraborty
2 arguments, 127 words per minute, 1070 words, 503 seconds
Argument 1
Trusted, interoperable network needed
EXPLANATION
Debashish stresses that modern telecom networks must move beyond passive data carriers to become intelligent platforms that support AI, digital identity, fraud mitigation, and data sovereignty. He calls for networks that are trusted, interoperable, and globally compatible to avoid fragmentation.
EVIDENCE
He observes that “today’s network are no longer passive carriers of data. They are becoming intelligent platforms where AI is deployed… where digital identity is authenticated, where fraud is mitigated, where sovereignty over data and decision-making is increasingly exercised” [38-41]; he adds that “as India advances in digital public infrastructure and its AI ambitions, the key is how we ensure these systems remain trusted, interoperable, and globally compatible while avoiding fragmentation and duplication” [41-42].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Debashish’s call for trusted, interoperable networks aligns with remarks about moving beyond passive data carriers to intelligent platforms for AI, identity and fraud mitigation [S1]; concerns about fragmentation and the need for global compatibility are echoed in a fragmentation warning [S20].
MAJOR DISCUSSION POINT
Trusted, interoperable network needed
AGREED WITH
Julian Gorman, Rahul Vatts, Speaker 1
Argument 2
Need to avoid parallel DPI layers
EXPLANATION
Debashish raises the concern that multiple digital public infrastructure (DPI) trust layers could be built in parallel, leading to duplication and inefficiency. He asks how to ensure operator‑led capabilities complement rather than duplicate efforts such as open gateway APIs.
EVIDENCE
He asks, “How do we ensure that the efforts which the MNOs… are making adding layers… complement and not duplicate the operator-led capabilities like Open Gateway APIs that GSMA has?” [76-78].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The risk of duplicate DPI trust layers is highlighted in the same discussion about avoiding parallel efforts and ensuring operator-led capabilities complement GSMA OpenGateway APIs [S1]; broader fragmentation challenges are noted in a separate comment on multiple ventures causing duplication [S20].
MAJOR DISCUSSION POINT
Need to avoid parallel DPI layers
AGREED WITH
Julian Gorman, Rahul Vatts, Speaker 1, Deepak Maheshwari, Mansi Kedia
DISAGREED WITH
Rahul Vatts, Speaker 1
R
Rahul Vatts
6 arguments, 179 words per minute, 2128 words, 712 seconds
Argument 1
Airtel’s scale builds trust via OTP & sovereign cloud
EXPLANATION
Rahul highlights Airtel’s massive infrastructure—millions of base stations, extensive fiber, and edge data centers—that underpins trust‑critical services like OTPs, Aadhaar‑enabled payments, and fraud detection. He also describes Airtel’s sovereign cloud offering as a way to keep critical data under national control.
EVIDENCE
He cites that “India transacted 28 lakh crores rupees of money through its UPI infrastructure” and that this rests on “more than a million BTSs… more than 500 lakh kilometres of fiber… more than a thousand edge and large hyperscale data centers” [55-62]; later he explains that Airtel launched a “sovereign cloud offering” and that they manage “140 crore transactions per second” and keep data within their own networks [239-244][262-270].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Airtel’s massive BTS, fiber and edge-data-center footprint is cited as the foundation of trust-critical services like OTP and UPI, matching the description of scale and trust in the AI summit summary [S21]; additional remarks on Airtel Cloud’s transaction capacity reinforce the sovereign-cloud narrative [S8].
MAJOR DISCUSSION POINT
Airtel’s scale builds trust via OTP & sovereign cloud
AGREED WITH
Julian Gorman, Deepak Maheshwari, Mansi Kedia
DISAGREED WITH
Debashish Chakraborty, Speaker 1
Argument 2
Four‑slice model of data sovereignty
EXPLANATION
Rahul proposes a four‑slice framework for data sovereignty: physical residency, control‑plane jurisdiction, operational control, and jurisdictional/legal authority. He argues that true sovereignty requires more than data location, encompassing who controls the cloud and software updates.
EVIDENCE
He outlines the slices: “is the data residing in the country…”; “is the control plane of that cloud within India…”; “where are you doing software patches…”; and “jurisdictional sovereignty… US CLOUD Act” [235-250].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The four-slice framework (physical residency, control-plane, operational control, jurisdiction) is directly referenced in a discussion that challenges the notion that data localisation alone ensures sovereignty [S1]; a nuanced approach to data movement and jurisdiction further supports this model [S26].
MAJOR DISCUSSION POINT
Four‑slice model of data sovereignty
AGREED WITH
Deepak Maheshwari, Speaker 1, Julian Gorman
DISAGREED WITH
Speaker 1
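Rahul’s four slices translate naturally into a checklist that can be applied to any cloud deployment. A minimal sketch, with invented field names (illustrative only, not an actual Airtel assessment tool):

```python
from dataclasses import dataclass

@dataclass
class SovereigntyAssessment:
    """One flag per slice of the four-slice model (names invented)."""
    data_resident_in_country: bool       # slice 1: physical residency
    control_plane_in_country: bool       # slice 2: who orchestrates the cloud
    patches_applied_domestically: bool   # slice 3: operational control
    free_of_foreign_legal_reach: bool    # slice 4: e.g. US CLOUD Act exposure

    def gaps(self) -> list[str]:
        """Return the slices where sovereignty is not actually held."""
        labels = {
            "data_resident_in_country": "physical residency",
            "control_plane_in_country": "control-plane jurisdiction",
            "patches_applied_domestically": "operational control",
            "free_of_foreign_legal_reach": "legal jurisdiction",
        }
        return [label for field, label in labels.items()
                if not getattr(self, field)]

# Local data alone is not sovereignty: three of the four slices still fail.
print(SovereigntyAssessment(True, False, False, False).gaps())
```

The last line is the point of the exercise: a deployment can satisfy data residency while failing every other slice, which is exactly the gap Rahul highlights.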
Argument 3
Bank APIs for digital lending illustrate open standards
EXPLANATION
Rahul describes how Airtel provides banks with real‑time telco indicators to assess credit risk, enabling rapid, low‑value loan decisions. This demonstrates the use of open APIs to share enriched data for financial inclusion.
EVIDENCE
He explains that “we have got solutions where banks can use the telco indicators to make a smart choice about giving you loans… we rank a person’s history based on a low risk or a high risk which enables the bank to be able to take a smart decision in a matter of milliseconds” [65-67].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The use of telco-derived indicators via open APIs for real-time credit decisions mirrors the description of open API-based contextual enrichment provided by other speakers [S1]; similar open-API ecosystems that lower entry barriers are discussed in an open-API platform overview [S25].
MAJOR DISCUSSION POINT
Bank APIs for digital lending illustrate open standards
AGREED WITH
Speaker 1, Mansi Kedia, Deepak Maheshwari
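The millisecond-scale loan decision Rahul describes can be pictured with a toy scoring function; the signal names and thresholds below are invented for illustration and bear no relation to Airtel’s actual model:

```python
def telco_risk_band(indicators: dict) -> str:
    """Collapse a few telco-derived signals into a low/high risk band."""
    score = 0
    if indicators.get("sim_age_days", 0) < 90:     # recently issued SIM
        score += 2
    if indicators.get("recent_sim_swap", False):   # SIM swapped just before the request
        score += 3
    if not indicators.get("number_active", True):  # number currently inactive
        score += 2
    return "high" if score >= 3 else "low"

# A long-held, stable number ranks as low risk; a fresh SIM with a recent swap does not.
print(telco_risk_band({"sim_age_days": 800, "recent_sim_swap": False}))  # low
print(telco_risk_band({"sim_age_days": 10, "recent_sim_swap": True}))    # high
```

Because every input is a pre-computed indicator rather than raw call records, a bank can evaluate such a function synchronously inside a loan-approval flow.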
Argument 4
Jurisdictional challenges under foreign cloud laws
EXPLANATION
Rahul points out that even if data is stored locally, foreign legal regimes like the US CLOUD Act can compel access, undermining true sovereignty. He stresses the need for selective data residency and control to protect critical datasets.
EVIDENCE
He notes that “under the US cloud act… the US government can demand data” and questions why “KYC data of my customer be sitting outside” or “health record… sitting outside this country” [236-244].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Legal analyses of cross-border cloud services note that foreign statutes such as the US CLOUD Act can compel data access, confirming the jurisdictional risk highlighted by Rahul [S26]; further discussion of ambiguous legal regimes for data stored abroad underscores the challenge [S27].
MAJOR DISCUSSION POINT
Jurisdictional challenges under foreign cloud laws
AGREED WITH
Speaker 1, Deepak Maheshwari
Argument 5
Sovereign cloud offering supports scalable rollout
EXPLANATION
Rahul emphasizes that Airtel’s sovereign cloud, with massive bandwidth and transaction capacity, provides a domestic platform for AI‑driven services, enabling scalable deployment for the ecosystem while keeping data under national jurisdiction.
EVIDENCE
He mentions that “Airtel Cloud… we do around 140 crore transactions per second” and that the Prime Minister asked about capacity, indicating high scalability and national relevance [262-270].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Airtel Cloud’s capacity to handle 140 crore transactions per second and its role in national AI services is documented in the summit recap on scale and trust [S8]; the broader observation that scale builds trust in digital infrastructure aligns with the same point [S21].
MAJOR DISCUSSION POINT
Sovereign cloud offering supports scalable rollout
Argument 6
Aadhaar security & data‑embassy discussion
EXPLANATION
Rahul defends Aadhaar’s security measures, noting masking and encryption, and acknowledges ongoing work on quantum‑resistant techniques and data‑embassy concepts. He argues that the challenge lies more in data management practices than in the technology itself.
EVIDENCE
He states that “Aadhaar is very secure… there is also the masking… the leakage of data or private data is really not the issue” and that “quantum work has already started” and that “Aadhaar itself is working on data embassies” [376-382].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Concepts of data embassies and cryptographic protection for personal records are explored in a technical note on data-embassy architectures [S30]; earlier remarks about using cryptographic layers for KYC/medical data on wearables echo this discussion [S8].
MAJOR DISCUSSION POINT
Aadhaar security & data‑embassy discussion
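The masking Rahul mentions can be illustrated with a toy function that hides all but the last four digits, the pattern used for masked Aadhaar display; UIDAI’s actual mechanisms (Virtual ID, tokenisation, encryption at rest) go well beyond this sketch:

```python
def mask_id(identifier: str, visible: int = 4) -> str:
    """Replace all but the trailing digits of an identity number with 'X'."""
    digits = identifier.replace(" ", "")
    return "X" * (len(digits) - visible) + digits[-visible:]

print(mask_id("1234 5678 9012"))  # XXXXXXXX9012
```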
S
Speaker 1
4 arguments, 159 words per minute, 1687 words, 633 seconds
Argument 1
Contextual data enrichment via open APIs
EXPLANATION
Speaker 1 explains that telecom service providers (TSPs) add value by supplying contextual, enriched data through open APIs, enabling banks and other entities to make informed decisions. This enrichment turns raw call data into actionable insights for fraud detection and authentication.
EVIDENCE
He describes “context and enrichment” as a key value, noting that TSPs provide “information that multiple of us as TSPs are able to provide, collate and make it available” and that this context can be used by banks for authentication and fraud decisions [101-110].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The speaker’s description of TSPs providing enriched contextual data through open APIs matches a detailed account of such APIs enabling fraud detection and authentication [S1]; an additional source on open-API platforms confirms the value-add of contextual enrichment [S25].
MAJOR DISCUSSION POINT
Contextual data enrichment via open APIs
AGREED WITH
Rahul Vatts, Mansi Kedia, Deepak Maheshwari
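What such an enrichment payload might look like can be sketched as follows; the field names are invented and do not reflect a real GSMA Open Gateway schema:

```python
def authentication_context(signals: dict) -> dict:
    """Collapse raw telco signals into the context a bank might consume."""
    return {
        # SIM swapped within the last 72 hours: a common account-takeover signal
        "sim_swap_last_72h": signals.get("sim_swap_hours_ago", 10**6) <= 72,
        # Handset (IMEI) changed since the last check
        "device_changed": signals.get("imei_changed", False),
        # Whether the subscriber is currently roaming
        "roaming": signals.get("roaming", False),
    }

# A swap 12 hours ago flags the first signal while the others stay quiet.
print(authentication_context({"sim_swap_hours_ago": 12}))
```

The enrichment step is what turns raw network events into decision-ready signals: the consuming bank never sees call records, only the derived booleans.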
Argument 2
Open APIs prevent duplication of effort
EXPLANATION
Speaker 1 argues that TSPs collaborate to expose open APIs, avoiding parallel infrastructures and ensuring that multiple financial institutions can access the same data sources. This collaborative approach reduces redundancy and streamlines digital lending and fraud prevention.
EVIDENCE
He notes that “we work in conjunction, put them in order, collaborate with the likes of COI and DOT to set up infrastructure as open APIs and then allow these APIs as interfaceable for institutions” and that these APIs support digital lending for over 1,100 banks [118-124].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The emphasis on collaborative open APIs to avoid parallel infrastructures is reflected in the same discussion about operator-led capabilities complementing GSMA APIs [S1]; a separate analysis of open-API ecosystems highlights how shared interfaces reduce redundancy [S25].
MAJOR DISCUSSION POINT
Open APIs prevent duplication of effort
DISAGREED WITH
Debashish Chakraborty, Rahul Vatts
Argument 3
Control‑plane and operational sovereignty concerns
EXPLANATION
Speaker 1 highlights that data residency alone does not guarantee sovereignty; control‑plane location, operational control, and regulatory jurisdiction are critical. He calls for clear standards or playbooks to manage these aspects for AI‑enabled networks.
EVIDENCE
He discusses that “the control plane of that cloud within India or not…” and that “operational sovereignty” is linked to data privacy laws, noting the need for a “referenceable standard” and playbooks to guide industry practice [284-288][295-300].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The four-slice sovereignty model (including control-plane location) is cited as a reference for standards and playbooks, aligning with Rahul’s framework [S1]; a practical guide to cloud computing stresses the importance of control-plane jurisdiction for true sovereignty [S26]; legal-jurisdiction concerns are further elaborated in a study of cross-border cloud evidence [S28].
MAJOR DISCUSSION POINT
Control‑plane and operational sovereignty concerns
AGREED WITH
Rahul Vatts, Deepak Maheshwari, Julian Gorman
DISAGREED WITH
Rahul Vatts, Deepak Maheshwari
Argument 4
Regulatory need for explainability & digital‑intermediary rules
EXPLANATION
Speaker 1 stresses that AI systems used by telcos must be explainable and that digital‑intermediary regulations need to evolve to cover new data uses. He calls for industry‑wide standards, blueprints, or playbooks to ensure transparency and compliance.
EVIDENCE
He states that “we would want the AI to explain… why did I block you?” and that “we need a referenceable standard… could be blueprint, could be playbooks” and that “digital intermediary rules” are an emerging regulatory area [285-293].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Calls for AI explainability and digital-intermediary regulation are supported by a high-level AI standards panel that stresses transparent, accountable AI systems and coordinated standards work [S16]; a session on responsible AI highlights the need for repositories and standards to build consumer trust [S5].
MAJOR DISCUSSION POINT
Regulatory need for explainability & digital‑intermediary rules
AGREED WITH
Rahul Vatts, Deepak Maheshwari
DISAGREED WITH
Rahul Vatts
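Speaker 1’s demand that the AI be able to answer “why did I block you?” amounts to requiring every automated decision to carry its reasons. A minimal sketch, with invented feature names and thresholds:

```python
def decide(call_features: dict) -> dict:
    """Return a block decision together with human-readable reasons."""
    reasons = []
    if call_features.get("spoofed_caller_id", False):
        reasons.append("caller ID did not match originating network records")
    if call_features.get("burst_rate", 0) > 100:   # calls per minute
        reasons.append("call burst rate exceeded threshold")
    return {"blocked": bool(reasons), "reasons": reasons}

# Every block carries an explanation; an empty reason list means no block.
print(decide({"spoofed_caller_id": True, "burst_rate": 10}))
```

A referenceable standard or playbook of the kind Speaker 1 asks for would, in effect, fix the vocabulary of such reason strings across operators.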
D
Deepak Maheshwari
4 arguments, 172 words per minute, 1833 words, 637 seconds
Argument 1
Sovereignty beyond localization; standards participation
EXPLANATION
Deepak argues that data sovereignty must extend beyond physical storage to include control over standards, decision‑making systems, and long‑term strategy. He advocates active participation in multistakeholder standard bodies rather than trying to control them unilaterally.
EVIDENCE
He outlines that sovereignty is not only about “physical location of the data” but also about “local context” and strategic control, referencing past Indian policies and the need to contribute to standards via bodies like GSMA, 3GPP, ISO, ITU, IEEE [141-150][155-162].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The argument that sovereignty includes strategic control over standards mirrors the digital-sovereignty narrative about managing infrastructure and standards globally [S8]; multistakeholder standards coordination is discussed as essential for comprehensive coverage [S16]; open, free internet governance emphasizes inclusive participation in standards bodies [S19].
MAJOR DISCUSSION POINT
Sovereignty beyond localization; standards participation
AGREED WITH
Speaker 1, Rahul Vatts
DISAGREED WITH
Rahul Vatts, Speaker 1
Argument 2
Multistakeholder standard participation as regulatory path
EXPLANATION
Deepak emphasizes that engaging in multistakeholder standard‑setting processes (e.g., GSMA, ISO, ITU) is the pragmatic way to achieve digital sovereignty while remaining compatible with global systems. He notes that contributions inevitably involve compromise but can yield net benefits.
EVIDENCE
He says that “whether it is GSMA or 3GPP, ISO, ITU, IEEE… they all have mechanisms for people and countries to participate in that decision making” and that “the effort should be about contributing to that standard making as a participant” [175-178].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The importance of engaging with GSMA, ISO, ITU, IEEE for shaping standards is echoed in a panel on AI standards coordination that calls for broad stakeholder involvement [S16]; the IGF opening remarks underline the need for multi-stakeholder governance to maintain an open, interoperable Internet [S19].
MAJOR DISCUSSION POINT
Multistakeholder standard participation as regulatory path
Argument 3
Open protocol, no IP fees for Global South adoption
EXPLANATION
Deepak points out that India’s DPI model is offered as an open protocol without intellectual property fees, allowing other countries to adopt, adapt, and scale the framework freely. This openness differentiates India’s approach from proprietary solutions that charge per‑user or per‑population fees.
EVIDENCE
He explains that “India’s DPI-led model… nothing of that sort is going… it’s a framework… open protocol… they can adopt it and change it the way they wish” and that “India doesn’t ask for that type of thing” unlike other providers that charge scaling fees [318-327].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
An overview of open-API platforms notes that open protocols without licensing fees lower entry barriers for developing economies, aligning with the claim of an IP-free DPI model [S25]; guidelines on open banking illustrate how open standards can be adopted without per-user fees, supporting the argument [S24].
MAJOR DISCUSSION POINT
Open protocol, no IP fees for Global South adoption
AGREED WITH
Julian Gorman, Mansi Kedia, Rahul Vatts
Argument 4
Reciprocal data‑embassy support
EXPLANATION
Deepak briefly affirms that data‑embassy arrangements could be supported on a reciprocal basis, implying mutual agreements between nations for data protection and sovereignty.
EVIDENCE
He responds succinctly, “I would say yes if it is on reciprocal basis” [375].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Technical discussions on data embassies describe reciprocal agreements between nations to protect data sovereignty, matching the speaker’s affirmative stance on reciprocal data-embassy arrangements [S30].
MAJOR DISCUSSION POINT
Reciprocal data‑embassy support
Mansi Kedia
2 arguments · 171 words per minute · 953 words · 334 seconds
Argument 1
Blueprints vs standards for inclusive DPI
EXPLANATION
Mansi differentiates between prescriptive standards and flexible blueprints, arguing that the World Bank promotes adaptable blueprints that incorporate best practices while allowing local customization. She sees blueprints as essential for inclusive, interoperable DPI deployment.
EVIDENCE
She states that “the World Bank works more towards the ideas of blueprints… bring together best practices from different countries and see how they can be made more adaptable” and contrasts this with “standards then get into ideas of commercialization” [219-226].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
World Bank-led initiatives that favour adaptable blueprints over rigid standards are highlighted in a policy note on inclusive DPI deployment, aligning with the speaker’s distinction between blueprints and standards [S19]; open-banking guideline documents also reference blueprint-style frameworks for interoperability [S24].
MAJOR DISCUSSION POINT
Blueprints vs standards for inclusive DPI
AGREED WITH
Rahul Vatts, Speaker 1, Deepak Maheshwari
DISAGREED WITH
Julian Gorman
Argument 2
India’s DPI evidence guides other economies
EXPLANATION
Mansi highlights India’s extensive experience and scale in DPI as valuable evidence for other emerging economies. She notes that multiple organizations and the government are sharing lessons, tools, and collaborations (e.g., Finternet with BIS) to help other countries adopt similar models.
EVIDENCE
She mentions that “India, surely for the vast amount of experience and scale… offers excellent evidence on what works and what doesn’t work” and cites collaborations such as “Finternet” with BIS and the role of mobile data for planning and mobility [355-362][363-365].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
India’s extensive DPI experience, including its role in UPI and large-scale telecom infrastructure, is cited as a benchmark for other economies in a report on cross-border digital payments and industrial innovation [S21]; a statement on India’s unique position to balance openness and sovereignty reinforces this evidence [S8].
MAJOR DISCUSSION POINT
India’s DPI evidence guides other economies
AGREED WITH
Julian Gorman, Deepak Maheshwari, Rahul Vatts
Audience
1 argument · 144 words per minute · 186 words · 77 seconds
Argument 1
Ring‑based KYC & data‑embassy proposal
EXPLANATION
An audience member proposes a wearable ring that stores a person’s KYC and medical records, encrypted and accessible only with consent, and suggests using blockchain to create a data‑embassy that protects the data when the device leaves the body.
EVIDENCE
The participant describes “a ring kind of product where the privacy data, the KYC data, resides physically only on that item… if it leaves the body it leaves in an encrypted form only and it can be collated with another key for the purpose for which it has been consented” and adds that this relates to “data embassies” [371-374].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The concept of storing KYC and medical records on a wearable ring with encrypted access, linked to data-embassy protection, is discussed in a technical brief on data embassies and cryptographic safeguards for personal devices [S30]; earlier remarks about a similar ring-based KYC concept appear in a session on data-embassy ideas [S8].
MAJOR DISCUSSION POINT
Ring‑based KYC & data‑embassy proposal
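The “leaves the body only in encrypted form, unlockable with a consent key” idea can be sketched as 2-of-2 secret sharing: the data-encryption key is split into a device share and a consent share, and neither share alone reveals anything. This is a pedagogical sketch under stated assumptions, not the participant’s design; a real system would wrap an AES-GCM key and add hardware attestation, and all names here are hypothetical.

```python
import secrets

def split_key(key: bytes) -> tuple[bytes, bytes]:
    """2-of-2 XOR secret sharing: returns (device_share, consent_share).
    Each share alone is uniformly random and reveals nothing about `key`."""
    device_share = secrets.token_bytes(len(key))
    consent_share = bytes(a ^ b for a, b in zip(key, device_share))
    return device_share, consent_share

def combine_shares(device_share: bytes, consent_share: bytes) -> bytes:
    """Recover the data-encryption key only when BOTH shares are present."""
    return bytes(a ^ b for a, b in zip(device_share, consent_share))

# The ring holds `device_share`; the user releases `consent_share` per purpose.
dek = secrets.token_bytes(32)  # hypothetical key that encrypts the KYC record
device_share, consent_share = split_key(dek)
assert combine_shares(device_share, consent_share) == dek
```

The design choice illustrated is that consent becomes a cryptographic act (releasing the second share) rather than a checkbox, which is what would let the encrypted record travel safely once off the device.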
Agreements
Agreement Points
Mobile networks are evolving into intelligent, AI‑enabled public infrastructure layers that actively shape services, fraud prevention and digital identity.
Speakers: Julian Gorman, Debashish Chakraborty, Rahul Vatts, Speaker 1
Network as AI‑enabled public layer · Trusted, interoperable network needed · Airtel’s scale builds trust via OTP & sovereign cloud · Contextual data enrichment via open APIs
All speakers describe the transition of telecom networks from passive connectivity to intelligent platforms that host AI models, provide real-time services such as OTPs, digital identity verification and fraud mitigation, and act as trusted layers of national digital infrastructure [13-15][38-41][55-62][101-110].
Strategic control over infrastructure and standards is essential to achieve digital sovereignty and avoid fragmentation.
Speakers: Julian Gorman, Debashish Chakraborty, Rahul Vatts, Speaker 1, Deepak Maheshwari, Mansi Kedia
Strategic control through global standards · Need to avoid parallel DPI layers · Four‑slice model of data sovereignty · Control‑plane and operational sovereignty concerns · Sovereignty beyond localization; standards participation · Blueprints vs standards for inclusive DPI
Speakers stress that sovereignty in an AI-driven world requires control over standards, APIs and the intelligence layer, not merely data localisation; fragmentation slows progress and interoperable, open standards or blueprints are needed to scale safely [19-25][76-78][235-250][284-288][175-178][219-226].
POLICY CONTEXT (KNOWLEDGE BASE)
This aligns with concerns about digital fragmentation highlighted in the State of Digital Fragmentation report, which calls for dialogue to prevent divergent standards [S44], and reflects the tension between digital sovereignty and global interoperability discussed at IGF 2025 [S42]. It also echoes calls for adapting global standards to local contexts to preserve strategic control [S49].
Data sovereignty must go beyond physical data residency to include control‑plane location, operational control and participation in standard‑setting processes.
Speakers: Rahul Vatts, Deepak Maheshwari, Speaker 1, Julian Gorman
Four‑slice model of data sovereignty · Sovereignty beyond localization; standards participation · Control‑plane and operational sovereignty concerns · Strategic control through global standards
All agree that true sovereignty requires more than storing data in-country; it also demands jurisdiction over the cloud control plane, software updates, legal authority and active contribution to multistakeholder standards bodies [235-250][175-178][284-288][19-21].
POLICY CONTEXT (KNOWLEDGE BASE)
The concept of data embassies that protect nations by locating control-plane functions abroad supports this broader view of sovereignty [S53]; discussions on balancing data sovereignty with the benefits of global cloud infrastructure further stress the need to address control-plane issues [S54]; and trade-policy analyses warn against simplistic data-localisation without considering operational control [S56].
Open APIs and contextual data enrichment are critical to avoid duplication of effort and to enable services such as digital lending, fraud detection and financial inclusion.
Speakers: Rahul Vatts, Speaker 1, Mansi Kedia, Deepak Maheshwari
Bank APIs for digital lending illustrate open standards · Contextual data enrichment via open APIs · Blueprints vs standards for inclusive DPI · Open protocol, no IP fees for Global South adoption
Speakers highlight that exposing enriched, contextual data through open, interoperable APIs lets multiple institutions (banks, regulators) reuse the same data, prevents parallel DPI stacks and supports inclusive financial services [65-67][101-110][219-226][318-327].
POLICY CONTEXT (KNOWLEDGE BASE)
Multiple DPI workshops underline the importance of open APIs for innovation and to prevent duplicated data collection, as noted in the WSIS e-agriculture session [S61] and the inclusive digitalisation analysis that stresses open data to eliminate redundancy [S60]; governance briefs also highlight open APIs as a core governance need for equitable DPI [S48].
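The contextual-enrichment pattern above can be illustrated with a minimal sketch of a lender consuming a network signal before a digital-lending decision. The endpoint URL, payload fields, and decision policy below are assumptions modelled loosely on CAMARA-style network APIs (such as SIM Swap), not an actual GSMA Open Gateway client; a real integration would follow the published API specifications and their OAuth flows.

```python
import json
from dataclasses import dataclass

# Hypothetical endpoint, illustrative only (not a real operator URL).
SIM_SWAP_CHECK_URL = "https://api.example-telco.com/sim-swap/v1/check"

@dataclass
class SimSwapCheck:
    """Request asking the operator: was this SIM swapped in the last `max_age_hours`?"""
    phone_number: str
    max_age_hours: int = 240

    def to_payload(self) -> str:
        return json.dumps({"phoneNumber": self.phone_number,
                           "maxAge": self.max_age_hours})

def assess_lending_risk(swapped: bool, otp_verified: bool) -> str:
    """Toy policy: contextual network data (a recent SIM swap) gates the decision."""
    if swapped:
        return "manual-review"  # recent swap is a strong account-takeover signal
    return "approve" if otp_verified else "retry-otp"

# A lender would POST check.to_payload() to the operator API and read a
# boolean `swapped` field from the response; here the answer is stubbed.
check = SimSwapCheck("+911234567890")
print(assess_lending_risk(swapped=False, otp_verified=True))  # approve
```

The point of the sketch is the reuse argument made in the session: because the operator exposes one standard check, banks and regulators consume the same enriched signal instead of each building a parallel data pipeline.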
India’s DPI model, built on open, scalable, and interoperable digital rails, can serve as a blueprint for other emerging economies.
Speakers: Julian Gorman, Deepak Maheshwari, Mansi Kedia, Rahul Vatts
Strategic control through global standards · Open protocol, no IP fees for Global South adoption · India’s DPI evidence guides other economies · Airtel’s scale builds trust via OTP & sovereign cloud
All note that India’s experience with UPI, massive telecom infrastructure and an open DPI framework provides evidence and a replicable model for the Global South, emphasizing openness, scalability and lack of licensing fees [11-12][318-327][355-362][55-62].
POLICY CONTEXT (KNOWLEDGE BASE)
India’s digital public infrastructure is repeatedly cited as a scalable blueprint for the Global South, both in the AI-focused briefing on India’s digital future [S45] and in the emerging norms discussion on DPI [S43]; UN-aligned policy briefs also reference India’s model as a reference for sustainable development [S47].
Regulatory frameworks must evolve to address AI explainability, digital‑intermediary responsibilities and jurisdictional challenges.
Speakers: Speaker 1, Rahul Vatts, Deepak Maheshwari
Regulatory need for explainability & digital‑intermediary rules · Jurisdictional challenges under foreign cloud laws · Sovereignty beyond localization; standards participation
Speakers call for new regulations that ensure AI decisions are explainable, clarify the scope of digital-intermediary duties, and mitigate cross-border legal exposure such as the US CLOUD Act [285-293][236-250][175-178].
POLICY CONTEXT (KNOWLEDGE BASE)
The need for AI explainability within sovereignty debates was highlighted in the IGF 2025 session on AI regulation [S42]; subsequent panels on digital-intermediary duties stress adapting global standards to local legal frameworks [S49]; and DPI governance consensus documents call for updated regulatory mechanisms to manage cross-border jurisdictional issues [S48].
Similar Viewpoints
Both emphasize that data residency alone is insufficient; sovereignty also depends on where the cloud control plane resides, who can patch software and which legal jurisdiction applies [235-250][284-288].
Speakers: Rahul Vatts, Speaker 1
Four‑slice model of data sovereignty · Control‑plane and operational sovereignty concerns
Both argue that digital sovereignty is achieved through strategic control of standards and participation in multistakeholder bodies rather than through isolationist policies [19-21][175-178].
Speakers: Julian Gorman, Deepak Maheshwari
Strategic control through global standards · Sovereignty beyond localization; standards participation
Both promote an open, adaptable approach (blueprints or open protocols) that avoids licensing fees and enables other countries to adopt and customise India’s DPI model [219-226][318-327].
Speakers: Mansi Kedia, Deepak Maheshwari
Blueprints vs standards for inclusive DPI · Open protocol, no IP fees for Global South adoption
Unexpected Consensus
Use of wearable devices as personal data embassies
Speakers: Audience, Rahul Vatts, Deepak Maheshwari
Ring‑based KYC & data‑embassy proposal · Aadhaar security & data‑embassy discussion · Reciprocal data‑embassy support
While the audience introduced a novel ring-based KYC concept, both Rahul and Deepak quickly aligned with the idea of data embassies: Rahul noted Aadhaar’s ongoing work on data embassies and Deepak affirmed reciprocal embassy arrangements, an unexpected convergence between a consumer-level proposal and high-level policy perspectives [371-374][376][375].
POLICY CONTEXT (KNOWLEDGE BASE)
Security reports warn that personal data from wearables can be exploited, underscoring the need for protective architectures such as data embassies [S51]; the concept of digital embassies extending to personal devices is explored in the data-embassy literature, which proposes using such devices as sovereign data shelters [S53].
Overall Assessment

There is strong consensus that telecom networks are now AI‑enabled public infrastructure requiring trusted, interoperable, and standards‑driven operation. Participants agree that data sovereignty must extend beyond mere localisation to include control‑plane, operational and legal dimensions, and that open APIs, contextual enrichment and open‑protocol DPI models are essential to avoid duplication and to scale solutions globally, especially for the Global South.

High consensus across technical, regulatory and policy dimensions, indicating a shared vision that can underpin coordinated actions on standards, open‑source DPI frameworks and sovereign cloud strategies.

Differences
Different Viewpoints
Definition and pathway to data sovereignty
Speakers: Rahul Vatts, Deepak Maheshwari, Speaker 1
Four‑slice model of data sovereignty · Sovereignty beyond localization; standards participation · Control‑plane and operational sovereignty concerns
Rahul proposes a concrete four-slice framework that looks at physical residency, control-plane jurisdiction, operational control and foreign legal reach as the core of data sovereignty [235-250]. Deepak argues that sovereignty must go beyond localisation, stressing strategic control over standards and active participation in multistakeholder bodies, and presenting India’s DPI as an open protocol without IP fees [141-150][155-162]. Speaker 1 echoes the need to consider control-plane and operational aspects, calling for referenceable standards or playbooks to manage AI-enabled networks [284-288][295-300]. The disagreement centres on whether sovereignty is best achieved through a technical-jurisdictional slice model (Rahul) or through standards participation and open protocols (Deepak), with Speaker 1 highlighting the need for industry-wide standards to address the same concerns.
POLICY CONTEXT (KNOWLEDGE BASE)
Ongoing debates about the precise definition of data sovereignty appear in the State of Digital Fragmentation analysis, which calls for common ground on governance concepts [S44]; the Data Sovereignty India AI Impact Summit notes tactical disagreements over definition while maintaining strategic alignment [S55].
Preferred normative instrument for DPI – global standards vs. flexible blueprints
Speakers: Julian Gorman, Mansi Kedia
Strategic control through global standards · Blueprints vs standards for inclusive DPI
Julian stresses that interoperable global standards are essential to avoid technical, regulatory or geopolitical fragmentation and to ensure safe, interoperable AI-enabled infrastructure [22-25]. Mansi counters that the World Bank favours adaptable blueprints that bring together best practices while allowing local customisation, arguing that standards can be overly prescriptive and commercialised [219-226][227-233]. The disagreement is on the instrument that should guide DPI deployment – hard, globally-harmonised standards versus flexible, context-specific blueprints.
POLICY CONTEXT (KNOWLEDGE BASE)
The tension between adopting global standards and allowing local flexibility was a key theme in the Emerging Norms for DPI workshop [S43] and the Global Standards vs Local Adaptation discussion at IGF [S49]; concerns about fragmentation reinforce the argument for flexible blueprints [S44].
Risk of parallel DPI layers and duplication of effort
Speakers: Debashish Chakraborty, Rahul Vatts, Speaker 1
Need to avoid parallel DPI layers · Airtel’s scale builds trust via OTP & sovereign cloud · Open APIs prevent duplication of effort
Debashish asks how to ensure MNO-led capabilities complement rather than duplicate GSMA OpenGateway APIs [76-78]. Rahul highlights Airtel’s massive proprietary infrastructure and sovereign cloud offering as a trust layer, which could operate independently of public DPI initiatives [239-244][262-270]. Speaker 1 argues that telecom service providers already collaborate through open APIs, providing contextual data to multiple institutions and thus avoiding parallel infrastructures [118-124]. The disagreement lies in whether private sovereign-cloud solutions risk creating duplicate DPI layers versus a collaborative open-API model that mitigates duplication.
POLICY CONTEXT (KNOWLEDGE BASE)
Cybersecurity forums stress the importance of collaboration to avoid duplicated efforts, a principle that applies to DPI layering as well [S59]; policy analyses on inclusive digitalisation highlight open data as a means to prevent parallel infrastructures [S60]; and fragmentation reports warn of the dangers of multiple, uncoordinated DPI stacks [S44].
Regulatory priority – explainability and digital‑intermediary rules vs. data residency/control
Speakers: Speaker 1, Rahul Vatts
Regulatory need for explainability & digital‑intermediary rules · Four‑slice model of data sovereignty
Speaker 1 calls for industry-wide explainability standards and clear digital-intermediary regulations, proposing playbooks or blueprints to ensure AI decisions can be justified and compliance can be demonstrated [285-293][304-311]. Rahul focuses on the technical-jurisdictional dimensions of sovereignty, asserting that data residency, control-plane location and jurisdictional authority are the primary regulatory concerns, with less emphasis on explainability [235-250]. The disagreement is about which regulatory aspect should be prioritised: AI explainability and intermediary governance versus data-control mechanisms.
POLICY CONTEXT (KNOWLEDGE BASE)
The IGF 2025 AI regulation session prioritized explainability and intermediary accountability, while other policy briefs argue that data residency remains a core sovereignty concern, illustrating the trade-off highlighted in the balancing-data-sovereignty discussion [S54] and the global-vs-local standards debate [S49].
Unexpected Differences
Sovereign cloud as a complete solution to data sovereignty vs. lingering control‑plane concerns
Speakers: Rahul Vatts, Speaker 1
Four‑slice model of data sovereignty · Sovereign cloud offering supports scalable rollout · Control‑plane and operational sovereignty concerns
Rahul presents Airtel’s sovereign cloud as a way to keep critical data within national jurisdiction and to support scalable AI services, implying that the sovereign cloud resolves most sovereignty issues [239-244][262-270]. Speaker 1, however, points out that even with data residency, the location of the control-plane and operational control remain critical, and calls for referenceable standards to address these aspects [284-288][295-300]. The unexpected tension is that a private sovereign-cloud offering, touted as a sovereignty solution, may still leave key control elements outside the country, contrary to Rahul’s implication.
POLICY CONTEXT (KNOWLEDGE BASE)
The data-embassy concept points out that sovereign clouds may still leave control-plane functions abroad, challenging the notion of a complete solution [S53]; analyses of data sovereignty versus global cloud benefits stress the need to address control-plane location explicitly [S54]; and trade-policy insights caution against relying solely on localisation without operational control [S56].
Overall Assessment

The discussion reveals broad consensus that telecom networks must become intelligent, trusted platforms for AI‑driven public services. However, speakers diverge on how to achieve data sovereignty, the appropriate normative tools for DPI (global standards vs. flexible blueprints), the risk of parallel private sovereign‑cloud solutions creating duplicate DPI layers, and the regulatory focus—whether on explainability and digital‑intermediary rules or on technical‑jurisdictional control. These disagreements are substantive but not antagonistic; they reflect different institutional perspectives (government, regulator, operator, multilateral development bank) rather than outright conflict.

Moderate. The disagreements are primarily about implementation pathways and normative instruments rather than fundamental goals. This suggests that while the community shares a common vision of secure, AI‑enabled digital public infrastructure, coordination will be needed to reconcile technical‑jurisdictional frameworks, standard‑blueprint choices, and the balance between private sovereign‑cloud offerings and open‑API collaboration. The implications are that policy harmonisation, joint standard‑development processes, and clear regulatory guidance will be essential to avoid fragmentation and to realise the trusted, interoperable network vision.

Partial Agreements
All four speakers agree that modern telecom networks must evolve into intelligent, trusted platforms that support AI, digital identity, fraud mitigation and enable digital public services. Julian frames the network as an AI‑enabled public layer [13-15]; Debashish stresses the need for trusted, interoperable networks to avoid fragmentation [38-41]; Rahul points to Airtel’s massive infrastructure and sovereign cloud as the foundation of trust for citizen‑centric services [55-62]; Speaker 1 highlights the value of contextual data enrichment through open APIs to make the network useful for financial inclusion and security [101-110]. The shared goal is a trusted, intelligent network, but the proposed means differ – programmable infrastructure, interoperability, sovereign cloud scale, or open‑API data enrichment.
Speakers: Julian Gorman, Debashish Chakraborty, Rahul Vatts, Speaker 1
Network as AI‑enabled public layer · Trusted, interoperable network needed · Airtel’s scale builds trust via OTP & sovereign cloud · Contextual data enrichment via open APIs
All three agree that some common framework is needed to avoid duplication and promote interoperability. Mansi advocates flexible blueprints that can be adapted locally [219-226][227-233]; Speaker 1 stresses the practical role of open APIs as a way to share contextual data and prevent parallel infrastructures [118-124]; Julian argues that harmonised global standards are essential to prevent fragmentation [22-25]. The consensus is on the necessity of a shared approach, but the preferred format – blueprints, open APIs, or global standards – diverges.
Speakers: Mansi Kedia, Speaker 1, Julian Gorman
Blueprints vs standards for inclusive DPI · Open APIs prevent duplication of effort · Strategic control through global standards
Takeaways
Key takeaways
Telecom networks are transitioning from simple connectivity providers to intelligent, AI‑enabled platforms that deliver trust, fraud mitigation, digital identity verification and contextual data enrichment.
Networks are becoming a trusted layer of national digital public infrastructure (DPI), supporting services such as UPI, Aadhaar‑based payments, OTP/SMS, and emergency response.
Data sovereignty in an AI‑driven era extends beyond physical data localisation; it includes control‑plane ownership, operational sovereignty, participation in global standards, and strategic autonomy.
Open, interoperable standards and APIs (e.g., GSMA Open Gateway) are essential to avoid parallel or duplicated DPI layers and to enable scalable, inclusive digital services.
Regulatory and policy frictions are emerging around AI explainability, accountability, digital‑intermediary obligations, and the jurisdictional reach of foreign cloud laws (e.g., the US CLOUD Act).
India’s DPI model, with open protocols, no IP licensing fees, sovereign cloud offerings, and diplomatic support, offers a scalable template for the Global South seeking digital sovereignty without isolation.
Collaboration between public agencies, telcos, fintechs, and multilateral bodies (World Bank, BIS, GSMA) is critical to develop blueprints, playbooks, and standards for AI‑enabled DPI.
Innovative data‑protection ideas such as personal‑device KYC storage (ring‑based) and data‑embassy concepts were raised, highlighting the need for secure, user‑controlled data architectures.
Resolutions and action items
Continue joint work on open APIs (e.g., GSMA Open Gateway) to ensure new AI‑driven services complement existing DPI layers rather than duplicate them.
Encourage telcos to develop and commercialise sovereign‑cloud offerings that keep critical citizen data under domestic control while leveraging hyperscaler efficiencies for non‑critical workloads.
Promote active participation of Indian stakeholders in global standard‑setting bodies (GSMA, ISO, ITU, IEEE) to shape AI and DPI standards rather than merely consume them.
Explore the feasibility of ‘data embassies’ and reciprocal data‑sovereignty arrangements, with a view to drafting a policy brief or pilot framework.
Leverage diplomatic and development channels (e.g., Ministry of External Affairs, Indian Council of World Affairs, World Bank) to share India’s DPI blueprint with Global South partners.
Develop sector‑specific playbooks/blueprints (e.g., for digital lending, fraud detection) that codify best‑practice APIs and governance models.
Unresolved issues
Concrete regulatory mechanisms for AI explainability and accountability in network‑level decision‑making remain undefined.
How to reconcile jurisdictional control of data stored in foreign hyperscaler clouds with national sovereignty requirements.
Specific processes to prevent the creation of parallel DPI infrastructures across multiple MNOs were discussed but not finalized.
Implementation details, governance models, and legal frameworks for data‑embassy concepts were only briefly mentioned.
Clarification of digital‑intermediary obligations for telcos when they process data for secondary purposes (e.g., credit scoring) is still pending.
Suggested compromises
Adopt a ‘selective residency’ approach: keep critical public‑interest data (e.g., KYC, health, defence) within national control while allowing non‑critical workloads to run on global hyperscalers for efficiency.
Focus on contributing to and shaping international standards rather than attempting to control them outright, thereby balancing sovereignty with global interoperability.
Offer India’s DPI framework under open‑protocol terms without IP licensing fees, enabling Global South adoption while allowing local customisation.
Use collaborative, multistakeholder playbooks/blueprints that provide guidance without being overly prescriptive, allowing countries to adapt solutions to their contexts.
Thought Provoking Comments
In an AI‑driven world, sovereignty is no longer just about where the data is stored, it’s about having strategic control over the infrastructure… Countries want to know how to build AI‑enabled public infrastructure that is safe, interoperable, and aligned with national priorities, while still remaining connected to global markets.
Re‑frames the traditional notion of data sovereignty, shifting it from a purely geographic concern to a broader strategic control issue that includes standards, AI models, and governance. This sets the conceptual agenda for the whole panel.
Established the central theme of the session, prompting subsequent speakers to address sovereignty from technical, regulatory and policy angles. It led directly to Deepak’s historical ‘walls’ metaphor and Rahul’s four‑slice definition of sovereignty.
Speaker: Julian Gorman
India transacted 28 lakh crore rupees through UPI in January – all on the connectivity layer. The network is not just plumbing; it is the heart of trust, providing OTPs, Aadhaar‑enabled payments, and even scoring customers for digital lending in milliseconds.
Provides concrete, high‑impact data that illustrates how telecom infrastructure underpins financial inclusion and trust, moving the discussion from abstract concepts to measurable outcomes.
Grounded the conversation in real‑world scale, prompting the panel to explore how AI can further embed trust (e.g., fraud detection, spam mitigation) and leading to the later debate on open APIs and sovereign cloud offerings.
Speaker: Rahul Vatts
Context and enrichment are the keys. The DPI framework provides raw data, but TSPs add contextual information (e.g., location vs. call) that lets banks or authentication services make smarter decisions. We expose this via open APIs for anyone to consume.
Introduces the idea that telecom data gains value only when enriched with context, and that open APIs can democratise that value across sectors.
Shifted the dialogue from “what data we have” to “how we share it responsibly”. Sparked follow‑up remarks about GSMA OpenGateway APIs and the need for standards to govern such contextual data exchanges.
Speaker: Speaker 1 (representing Vodafone Idea / TSPs)
Sovereignty is not about building walls that block two‑way traffic. It’s about contributing to global standards (GSMA, ISO, ITU, etc.) so that we get more than we give. We must accept give‑and‑take because international organisations require shared governance.
Uses a vivid ‘walls’ metaphor to critique protectionist approaches and advocates for active participation in standard‑setting, reframing sovereignty as collaborative rather than isolationist.
Reoriented the conversation toward multilateral engagement, influencing Mansi’s distinction between standards and blueprints and reinforcing the call for India to be a contributor, not just a consumer, of global norms.
Speaker: Deepak Maheshwari
Data sovereignty can be broken into four slices: (1) data residency, (2) control‑plane location, (3) operational sovereignty (where patches and software updates originate), and (4) jurisdictional sovereignty (e.g., the US CLOUD Act). Merely storing data locally is insufficient.
Provides a clear, actionable framework that moves the abstract debate into concrete policy and technical dimensions, highlighting gaps in current implementations.
Prompted deeper examination of sovereign cloud offerings, led to discussion of Airtel’s own sovereign cloud, and underscored the regulatory friction points later raised by the Vodafone Idea representative.
Speaker: Rahul Vatts
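The four-slice framework lends itself to a simple operational checklist. The sketch below is a hypothetical illustration of how the four dimensions could be assessed together; the field names and example values are assumptions, not part of the speaker’s proposal.

```python
from dataclasses import dataclass

@dataclass
class SovereigntyAssessment:
    """Hypothetical checklist mapping the four slices of data sovereignty."""
    data_in_country: bool               # 1. data residency
    control_plane_in_country: bool      # 2. where the cloud control plane runs
    updates_under_local_control: bool   # 3. operational sovereignty (patches, software)
    free_of_foreign_legal_reach: bool   # 4. jurisdictional exposure (e.g., CLOUD Act)

    def gaps(self) -> list[str]:
        """Return the sovereignty dimensions that remain unaddressed."""
        labels = {
            "data_in_country": "data residency",
            "control_plane_in_country": "control-plane location",
            "updates_under_local_control": "operational sovereignty",
            "free_of_foreign_legal_reach": "jurisdictional sovereignty",
        }
        return [label for field, label in labels.items() if not getattr(self, field)]

# A 'sovereign cloud' with local data but a foreign control plane still has gaps:
print(SovereigntyAssessment(True, False, False, False).gaps())
# ['control-plane location', 'operational sovereignty', 'jurisdictional sovereignty']
```

This makes concrete the panel’s point that localisation alone (the first flag) satisfies only one of the four slices.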
Standards are prescriptive; blueprints are flexible. The World Bank focuses on blueprints that capture best practices but allow adaptation to local contexts, avoiding the rigidity of strict standards.
Clarifies a common confusion between standards and blueprints, offering a pragmatic pathway for emerging economies to adopt DPI without being locked into one‑size‑fits‑all solutions.
Added nuance to the standards debate, influencing the panel’s view on how India’s DPI model can be exported to the Global South while respecting local variations.
Speaker: Mansi Kedia
India’s DPI model is open, with no IP royalties. It can be transferred to other countries as a framework, supported by diplomatic channels (e.g., Ministry of External Affairs, Indian Council of World Affairs). This openness differentiates it from proprietary solutions.
Highlights the strategic advantage of an open, non‑monetised model for scaling digital sovereignty globally, linking technology to soft diplomacy.
Expanded the conversation from technical standards to geopolitical strategy, reinforcing the earlier point about collaborative sovereignty and encouraging other participants to consider how to replicate the model in the Global South.
Speaker: Deepak Maheshwari
Audience suggestion: a wearable ring that stores KYC/medical data locally, encrypted, with blockchain‑based consent and the concept of ‘data embassies’ for sovereign data storage.
Introduces a novel, user‑centric embodiment of data sovereignty that merges hardware, cryptography, and blockchain, pushing the discussion into future product possibilities.
Prompted brief reactions from Deepak and Rahul, underscoring the need for secure personal data vaults and hinting at emerging use‑cases beyond network‑level solutions.
Speaker: Vijay Agarwal (audience)
Overall Assessment

The discussion was steered by a series of pivotal insights that moved it from high‑level rhetoric to concrete, actionable ideas. Julian’s redefinition of sovereignty set the thematic foundation, which was then grounded by Rahul’s real‑world scale of UPI and the trust layer. The introduction of context‑enriched data and open APIs (Speaker 1) opened a technical pathway, while Deepak’s ‘walls’ metaphor and Rahul’s four‑slice sovereignty framework reframed the policy debate around collaboration versus isolation. Mansi’s clarification of standards versus blueprints and Deepak’s emphasis on an open, non‑IP model provided a pragmatic roadmap for exporting India’s DPI to the Global South. Finally, the audience’s wearable‑data concept illustrated how these ideas could manifest in future consumer products. Collectively, these comments shifted the conversation from abstract notions of digital public infrastructure to a nuanced, multi‑dimensional view that integrates technology, regulation, standards, and geopolitics, shaping a forward‑looking agenda for both India and emerging economies.

Follow-up Questions
How can MNOs ensure that new DPI trust layers complement rather than duplicate existing operator‑led capabilities such as Open Gateway APIs?
Addresses the risk of parallel digital infrastructure and promotes interoperability and efficient use of resources.
Speaker: Debashish Chakraborty
How should India define data sovereignty in an AI‑driven DPI era beyond data localization, including control over standards, decision‑making systems, and long‑term strategic autonomy?
Seeks a comprehensive policy framework that captures the full spectrum of sovereignty in an AI‑centric environment.
Speaker: Deepak Maheshwari
What are the risks when public digital infrastructure and private digital capabilities are built in silos, and why are global standards essential for inclusive digital outcomes?
Highlights potential inefficiencies, security gaps, and missed innovation opportunities without coordinated standards.
Speaker: Mansi Kedia
What does data sovereignty practically mean for operators regarding data storage, edge processing, cloud reliance, and control of AI models?
Clarifies operational implications for telecoms to ensure true sovereignty beyond mere data residency.
Speaker: Rahul Vatts
What are the biggest policy frictions emerging as networks become AI‑driven platforms, and how can data‑sovereignty frameworks address these regulatory challenges without slowing innovation?
Identifies regulatory gaps and seeks solutions that balance innovation with sovereign safeguards.
Speaker: Martin (Vodafone Idea)
How can India leverage its DPI and telecom‑led digital architecture to provide a credible, scalable model for the Global South seeking digital sovereignty without technological isolation?
Explores exportability of India’s model and its relevance for developing economies.
Speaker: Deepak Maheshwari
How will India’s DPI model shape digital development strategies across emerging economies?
Assesses the influence of India’s experience on policy and implementation in other emerging markets.
Speaker: Mansi Kedia
Why don’t we have a wearable product (e.g., a ring) that stores KYC/medical data locally with encryption and blockchain‑based data‑embassy features, and could India consider offering data embassies?
Proposes a novel privacy‑preserving solution and raises the concept of data embassies for sovereign data storage.
Speaker: Vijay Agarwal (audience)
What standards or referenceable frameworks are needed to enable explainable, accountable AI in telecom‑driven fraud and spam mitigation?
Calls for technical standards to balance security with AI transparency.
Speaker: Martin (Vodafone Idea)
How can sovereign cloud offerings be designed to ensure data residency, control‑plane sovereignty, and jurisdictional protection while still leveraging hyperscale efficiencies?
Seeks a design that reconciles sovereign requirements with the benefits of large‑scale cloud services.
Speaker: Rahul Vatts
What mechanisms can ensure that telecom operators, as digital intermediaries, comply with emerging data‑privacy regulations while still providing value‑added AI services?
Addresses regulatory compliance for operators expanding into AI‑enabled services.
Speaker: Martin (Vodafone Idea)
What research is needed to assess the impact of AI‑driven network functions on explainability versus security trade‑offs (e.g., auto‑blocking scams)?
Investigates how to maintain security without sacrificing AI explainability.
Speaker: Martin (Vodafone Idea)
How can open, interoperable protocols be structured to avoid duplication and ensure seamless integration of DPI services across multiple operators?
Aims to prevent fragmented solutions and promote a unified digital public infrastructure.
Speaker: Martin (Vodafone Idea)
What are the implications of data sovereignty on intellectual property rights and licensing when exporting India’s DPI model to other countries?
Considers legal and IP challenges in sharing India’s open‑protocol DPI framework internationally.
Speaker: Deepak Maheshwari
What role can multilateral institutions play in establishing data embassies and reciprocal data‑sovereignty arrangements?
Explores international cooperation mechanisms for sovereign data storage and exchange.
Speaker: Deepak Maheshwari

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

AI-Driven Enforcement: Better Governance through Effective Compliance & Services


Session at a glance: Summary, keypoints, and speakers overview

Summary

The symposium, organized by the Income Tax Department, focused on how artificial intelligence can enhance enforcement, compliance and citizen services in tax administration and broader law-enforcement contexts [5][8]. Chairman Ravi Agrawal highlighted that the upcoming Income Tax Act 2025 will create a technology-driven ecosystem, reducing interpretative ambiguity and enabling AI-based algorithms to improve tax certainty and lower litigation [32][33][34]. He emphasized that AI can amplify human capability by turning large data sets into insights and automating routine tasks, and that responsible deployment requires high-quality data, secure systems, accountability and continuous training [35][36][43]. Recent AI pilots in the department have already generated significant results, with targeted nudges prompting 1.11 crore taxpayers to file updated returns and uncovering foreign assets worth ₹99,000 crore and foreign income of ₹6,500 crore [70][71].


In the industry-academia track, Project Insight 2.0 was presented as an AI-enabled compliance platform that will provide quick, accurate information to taxpayers, improve NERJ campaigns, and use large language models to tag and predict litigation risk [88][92][93][96]. LTI’s Ramesh Revuru introduced “Bharatverse,” an Indianized multi-agent platform built on pre-assembled foundational, data, knowledge, orchestration and consumption layers, enabling faster development of domain-specific agents for the CBDT [110][112][115][116]. Technical lead T. Srinivasan explained the creation of a sovereign small-language model (SLM) fine-tuned via low-rank adaptation (LoRA) on tax-specific data, integrated with vector databases and ontologies to support multilingual, context-aware chatbots and automated compliance checks [129][132-135][141-144][148-152].
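The LoRA idea referenced above can be reduced to a short sketch: rather than updating a full pretrained weight matrix W, LoRA trains two small factors B and A and adds a scaled product (alpha/r)·BA to the frozen weights. The NumPy toy below is an illustration only, with all sizes, names and the training step invented for the example; it is not LTI's actual pipeline, which would wrap many layers of a real transformer (e.g., via a library such as `peft`).

```python
import numpy as np

# Minimal LoRA sketch on a single linear layer, in plain NumPy.
# All names and sizes are illustrative.

rng = np.random.default_rng(0)
d_in, d_out, rank, alpha = 64, 64, 8, 16

W = rng.normal(size=(d_out, d_in))             # frozen pretrained weight
A = rng.normal(scale=0.01, size=(rank, d_in))  # trainable low-rank factor
B = np.zeros((d_out, rank))                    # zero-init: no change at start

def forward(x, B):
    # Frozen path plus scaled low-rank update: W x + (alpha/r) * B A x
    return W @ x + (alpha / rank) * (B @ (A @ x))

x = rng.normal(size=d_in)
assert np.allclose(forward(x, B), W @ x)       # identical to base model at init

# One toy gradient step on B alone (W stays frozen), pulling the output
# toward an arbitrary target to mimic task-specific adaptation.
target = rng.normal(size=d_out)
err = forward(x, B) - target
B -= 0.01 * np.outer(err, (alpha / rank) * (A @ x))
assert np.linalg.norm(forward(x, B) - target) < np.linalg.norm(W @ x - target)
```

Because B starts at zero, the adapted model reproduces the base model exactly before training, and only the small factors are updated, which is what makes the adaptation cheap.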


Professor Mausam broadened the discussion to law-enforcement AI, citing use cases such as facial-recognition-based crime reduction, satellite-imagery monitoring and multimodal analytics for fraud detection, while warning that bias, over-triggering and loss of human oversight must be mitigated [226-233][258-263][291-303]. Martin Wilcox described next-generation AI risk analytics, stressing the need for scalable graph analytics and multimodal data processing, and illustrated how “Bring Your Own Model” and in-warehouse inference can accelerate credit-risk scoring 25-fold [324-327][329-336][344-347]. In the regulatory segment, RBI’s Suvendu Pati presented “MuleHunter.ai,” an AI system deployed across 26 banks that identifies mule-account patterns with accuracy in the 80-90 % range and is moving toward real-time transaction scoring [398-404][408-410][438-440].
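The mule-account detection idea can be made concrete with a deliberately simplified sketch. MuleHunter.ai's actual feature set (857 variables, per the session) is not public, so the toy below invents a single heuristic of its own: score an account by how completely it passes through funds pooled from many distinct senders. All names and numbers are hypothetical.

```python
from collections import defaultdict

# Toy mule-account scorer. Real systems add temporal velocity,
# network-graph and bank-specific features; this keeps one signature:
# many distinct inbound transfers relayed onward almost in full.

def mule_scores(transactions):
    """transactions: iterable of (sender, receiver, amount) tuples."""
    inflow = defaultdict(float)
    outflow = defaultdict(float)
    fan_in = defaultdict(set)
    for src, dst, amt in transactions:
        inflow[dst] += amt
        outflow[src] += amt
        fan_in[dst].add(src)
    scores = {}
    for acct in set(inflow) | set(outflow):
        if inflow[acct] == 0:
            continue  # no inbound funds, cannot be a pass-through mule
        pass_through = min(outflow[acct] / inflow[acct], 1.0)
        scores[acct] = pass_through * len(fan_in[acct])  # weight by fan-in
    return scores

txns = [
    ("a1", "mule", 5000), ("a2", "mule", 4000), ("a3", "mule", 6000),
    ("mule", "exit", 14500),   # near-total pass-through of pooled funds
    ("a4", "shop", 1200),      # ordinary payment: single source, no relay
]
scores = mule_scores(txns)
assert scores["mule"] > scores["shop"]
```

In a production setting such a score would be one feature among many, fed to a supervised model trained on confirmed mule cases rather than used as a standalone rule.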


Police officer Ram Ganesh demonstrated a co-pilot that ingests FIRs, generates compliant investigative pathways, automates legal requests and leverages open-source intelligence, having been used in over 467 cases in Maharashtra [465-470][474-480][482-485]. SEBI’s Avneesh Pandey outlined four AI tools: RIDAR for ad compliance, Sudarshan for multimodal fraud detection, Infomerge for data consolidation, and a cyber-resilience framework, showcasing how AI supports proactive regulation and audit automation [524-531][532-537][538-545][546-549]. Shashi Bhushan Shukla summarized the department’s AI journey, noting the scale of taxpayer data (PAN for 80 crore people, 9 crore ITRs, 650 crore SFT fields), the Nudge initiative’s success in eliciting ₹6,540 crore additional income, and plans for real-time, AI-driven pre-filing assistance [563-571][580-587][608-612].


Justice R. Mahadevan concluded that AI has moved from aspirational to operational across tax administration and law enforcement, emphasizing the need for human-in-the-loop safeguards, explainability and ethical governance [617-629]. The symposium therefore underscored AI’s potential to transform compliance, risk assessment and investigative workflows while insisting on responsible, transparent implementation to maintain public trust [35][43][617-629].


Keypoints


Major discussion points


AI as a strategic enabler for tax administration and enforcement – The Chairman highlighted that the upcoming Income Tax Act 2025 will create a “technology-driven ecosystem” where AI reduces interpretative ambiguity, improves tax certainty and supports proactive enforcement through nudges and risk-based analytics [32-35][70-74].


Industry-led AI solutions for taxpayer services (Project Insight 2.0 and the “Blueverse/Bharatverse” platform) – Commissioners and LTI representatives described how AI will deliver end-to-end taxpayer assistance, litigation-risk scoring, multi-agent platforms built on sovereign small-language models, and automated workflow orchestration [88-97][110-118][129-156][162-170].


Broader law-enforcement applications of AI (multimodal analytics, predictive policing, and human-in-the-loop safeguards) – Professor Mausam outlined use-cases ranging from CCTV-based crime reduction and satellite-based surveillance to financial-data anomaly detection, stressing the need for explainability, bias mitigation and civil-liberty protections [210-224][226-236][240-254][291-304].


Regulatory bodies deploying AI at scale (RBI’s “MuleHunter” and AI governance framework) – RBI’s Chief General Manager explained the seven AI-governance sutras, the AI sandbox concept, and the MuleHunter system that flags suspicious banking patterns with high accuracy, illustrating concrete results and future real-time transaction scoring [369-384][389-410][416-424][428-438].


AI for compliance and cyber-security in financial markets (SEBI initiatives) – SEBI’s executive described tools such as RIDAR for ad-compliance, Sudarshan for multimodal fraud detection, and Infomerge for investigative data integration, emphasizing democratized AI development and continuous monitoring [524-539][543-549].


Overall purpose / goal


The symposium was convened to examine how artificial intelligence can be operationalized across the Income Tax Department and other regulatory agencies to enable easier compliance, reduce disputes, and build trust-based governance ([8], [30-31]). Speakers from government, industry, and academia presented concrete projects, policy frameworks, and technical architectures aimed at turning AI from a conceptual promise into a practical, scalable tool for revenue collection, enforcement efficiency, and citizen-centric services.


Overall tone and its evolution


– The opening remarks set a formal and forward-looking tone, emphasizing the paradigm shift of the new tax law and the strategic importance of AI ([32-35]).


– As industry and academic presenters took the stage, the tone became optimistic and demonstrative, showcasing rapid prototyping, “building agents without code” and tangible performance gains ([54-63], [110-118], [129-156]).


– Professor Mausam introduced a broader, visionary yet cautionary tone, highlighting vast opportunities while warning about bias, over-triggering, and the need for human oversight ([291-304]).


– RBI and SEBI speakers adopted a pragmatic and results-focused tone, reporting concrete accuracy metrics, deployment numbers, and governance safeguards ([389-410], [524-539]).


– The closing remarks returned to a celebratory and collaborative tone, reaffirming AI’s operational status, the collective progress made, and gratitude to all participants ([617-644]).


Overall, the discussion moved from high-level policy framing, through technical showcases, to concrete regulatory deployments, maintaining a consistently constructive tone while interspersing measured cautions about ethics and accountability.


Speakers

Amandeep Dhanoa


– Role/Title: Indian Revenue Service Officer, 2018 batch; Moderator of the symposium


Shri Ravi Agrawal


– Role/Title: Chairman, Central Board of Direct Taxes (CBDT); Chief Executive Officer of the Department of Income Taxes


– Affiliation: Income Tax Department, Government of India [S18]


Abhishek Kumar


– Role/Title: Commissioner of Income Tax, Insights (Project Insight 2.0)


Ramesh Revuru


– Role/Title: Global Head of Engineering, LTI Mindtree [S11]


T. Srinivasan


– Role/Title: Technology Lead, LTI Mindtree [S13]


Professor Mausam


– Role/Title: Professor, AI researcher, founding head of the Yardi School of AI, IIT Delhi [S20]


Martin Wilcox


– Role/Title: Senior Vice President, Teradata; Global leader in AI-driven data analytics [S22]


Shashi Bhushan Shukla


– Role/Title: Principal Commissioner, CBDT; Key architect behind Data Analytics Cell and Saksham Nudge Initiative [S23]


Justice R. Mahadevan


– Role/Title: Joint Commissioner of Income Tax; Delivered the vote of thanks


Suvendu Pati


– Role/Title: Chief General Manager & Head of FinTech, Reserve Bank of India (RBI) [S4][S5]


Ram Ganesh


– Role/Title: Cyber security expert and Founder, CyberEye [S6][S7]


Avneesh Pandey


– Role/Title: Executive Director, SEBI; National voice on technology strategy and cybersecurity governance [S21]


Additional speakers:


Shri Shirdi Anand Jha – Principal Chief Commissioner of Income Tax, Delhi (mentioned in the opening remarks)


Harsha Poddar – Indian Police Service (IPS) officer; Award-winning innovator in AI-driven policing (introduced in Category 2)


Full session report: Comprehensive analysis and detailed insights

The symposium opened with Amandeep Dhanoa, an Indian Revenue Service officer of the 2018 batch, welcoming the distinguished guests, colleagues and speakers and stating that the Income Tax Department had convened the event to explore how artificial intelligence (AI) can improve governance through more effective compliance and services [1-3]. He introduced the Honourable Chairman of the Central Board of Direct Taxes, Shri Ravi Agrawal, a senior IRS officer who has overseen the department’s digital transformation, including the Central Processing Centre [10-18]. After a brief group-photo arrangement, Dhanoa invited the Chairman to set the tone with his opening remarks [21-24].


Chairman Agrawal linked the symposium theme “AI-driven enforcement for better governance” to the forthcoming Income Tax Act 2025, describing a technology-driven ecosystem that simplifies language, reduces interpretative ambiguity and embeds AI-based algorithms to enhance tax certainty and lower litigation [32-34]. He emphasized that AI amplifies human capability by turning vast data into insights, automating routine work and enabling faster, smarter decisions at scale [35-36]. He outlined the prerequisites for responsible AI deployment: high-quality shareable data, secure systems, clear accountability, strong safeguards and continuous training [43-44]. He warned that AI must be driven by humans rather than the reverse, underscoring the need to build capacity in human resources [48-51][52-53]. An anecdote illustrated AI’s speed: with the help of his son, he generated a functional training-module code in five to six hours, a task that would normally take months [54-63]. He concluded by reporting early AI pilots: targeted nudges prompted 1.11 crore taxpayers to file updated returns, generating over ₹8,800 crore in revenue, while prompts on foreign-asset disclosures led 1.57 lakh taxpayers to reveal ₹99,000 crore in assets, yielding an additional ₹6,540 crore in tax, and 6.96 lakh taxpayers withdrew bogus deductions, adding ₹1,758 crore [78-80].


Category 1 – Industry & Academia


* Abhishek Kumar (Commissioner, Income Tax – Project Insight 2.0) presented AI-enabled compliance objectives, including NERJ campaigns, litigation-risk assessment, LLM-based issue tagging and case-vulnerability prediction [95-98].


* Ramesh Revuru (Global Head of Engineering, LTI Mindtree) launched the “Bharatverse” (Indianised Blueverse) agentic platform, explained its five-layer architecture, introduced the “right-action” concept for deterministic outputs and cited eight patents on AGI [99-103].


* T. Srinivasan (Technology Lead, LTI Mindtree) detailed the technical architecture of a sovereign small language model (SLM) for tax, describing LoRA-based low-cost adaptation, vector-DB/RAG retrieval, an ontology-driven knowledge graph, multilingual support and deterministic chat-bots [104-108].


* Prof. Mausam (Founding Head, Yardi School of AI) offered a broad view of AI in law enforcement, covering data modalities (structured, visual, speech, language) and illustrative use-cases such as CCTV-based crime reduction, satellite imagery for maritime surveillance, anomaly detection in taxi behaviour and financial-crime graph analysis; he stressed critical safeguards: human-in-the-loop, bias mitigation, data centralisation and protection of civil liberties [109-115].


* Martin Wilcox (SVP, Teradata) argued for in-warehouse graph analytics and multimodal AI, highlighted “Bring-Your-Own-Model” capability and cited case studies: a Brazilian credit-union income-estimation model delivering 25× faster inference and an Asian bank’s NPS-driven chat-analysis [116-120].
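The retrieval step behind the kind of context-aware, document-grounded chatbot described in this track can be sketched in a few lines: embed the query, rank stored passage embeddings by similarity, and hand the top matches to the language model. The toy below substitutes random vectors for a trained embedding model and is an assumption-laden illustration, not the architecture presented in the session; in production an embedding model and a vector database replace both pieces.

```python
import numpy as np

# Toy sketch of vector retrieval for a RAG-style pipeline.
# Random vectors stand in for real passage/query embeddings.

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def retrieve(query_vec, doc_vecs, k=2):
    """Indices of the k stored vectors most similar to the query."""
    sims = [cosine(query_vec, d) for d in doc_vecs]
    return sorted(range(len(doc_vecs)), key=lambda i: -sims[i])[:k]

rng = np.random.default_rng(1)
docs = rng.normal(size=(5, 16))                # five pretend passages
query = docs[3] + 0.05 * rng.normal(size=16)   # query close to passage 3

assert retrieve(query, docs)[0] == 3           # nearest passage comes back first
```

The retrieved passages would then be concatenated into the model's prompt, which is what lets a small fine-tuned model answer from current, domain-specific documents rather than only from its training data.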


Category 2 – Regulatory & Enforcement


* Suvendu Pati (Chief General Manager, RBI – FinTech) outlined RBI’s AI governance “seven sutras” and six pillars, described the AI sandbox policy and introduced the “MuleHunter.ai” system, featuring 857 variables, bank-specific relevance, accuracy in the 80-90 % range and real-time transaction scoring with cross-bank aggregation [121-126].


* Ram Ganesh (Founder, CyberEye) demonstrated an AI “co-pilot” for police investigations that ingests FIRs, generates SOP-compliant workflows, drafts legal requests, integrates telecom and forensic data, leverages open-source intelligence and combines four AI technologies (LLMs, graph-NNs, agentic AI, big-data analytics) [127-132].


* Avneesh Pandey (Executive Director, SEBI) described SEBI’s AI suite: RIDAR for ad-compliance monitoring of mutual-fund advertisements, Sudarshan for multimodal, multilingual fraud detection, Infomerge for investigation data-integration and report generation, and a cyber-resilience framework employing ensemble-model validation [133-138].


* Shashi Bhushan Shukla (Principal Commissioner, CBDT) traced the Income Tax AI journey from 2004 to 2024, presented the Saksham Nudge 7-step strategy (data → analysis → action → communication → hand-holding → enablement), and shared outcomes: 1.57 lakh taxpayers disclosed ₹99,000 crore in foreign assets, yielding ₹6,540 crore extra tax; 6.96 lakh taxpayers withdrew bogus deductions, adding ₹1,758 crore in tax; he also announced the International AI-misuse consortium targeting synthetic identities, deep-fakes and AI-assisted fraud [139-145].


Vote-of-Thanks – Justice R. Mahadevan


Justice R. Mahadevan concluded with a concise recap, presented as bullet points:


– Opening remarks set the vision of AI-driven enforcement for better governance.


– Category 1 highlighted industry innovations: AI-enabled compliance, sovereign LLMs, agentic platforms, multimodal analytics and graph-based insights.


– Category 2 showcased regulatory frameworks, AI-assisted investigations, market surveillance tools and large-scale nudge-based outcomes.


– All speakers underscored responsible, human-centric AI deployment and the transition from aspirational concepts to operational systems.


He thanked the organizers, speakers and participants for their contributions [146-150].


Session transcript: Complete transcript of the session
Amandeep Dhanoa

Thank you. Dear guests, colleagues, and esteemed speakers, namaskar. I, Amandeep Dhanoa, Indian Revenue Service Officer of the 2018 batch, welcome you all to this symposium by the Income Tax Department on AI-driven enforcement for better governance through effective compliance and services. Artificial intelligence today is reshaping every domain of governance, and when it comes to public services, the stakes are uniquely high. Understanding these stakes, the Income Tax Department has called upon distinguished speakers from the industry, the academia, and regulatory bodies to delve into the most pertinent question of the hour, that is, how can artificial intelligence enable easier compliance, lower disputes, and strengthen trust-based governance?

Today’s sessions are structured deliberately into two categories, Category 1 of Industry and Academia and Category 2 of Regulatory Bodies. With that, I would like to introduce Honourable Chairman, Central Board of Direct Taxes, Shri Ravi Agrawal. Sir is a distinguished Indian Revenue Service Officer of the 1988 batch who brings over three decades of experience across multiple verticals of the Income Tax Department. He is the Chief Executive Officer of the Department of Income Taxes.

He has played a pivotal role in key phases of the department’s digital transformation, including the establishment of the Central Processing Centre. Known for his strong digital mindset and technocratic approach, he has consistently encouraged the use of data and technology to strengthen administration, enhance compliance and translate data into revenue through a prudent approach. Now, I request Principal Chief Commissioner of Income Tax, Delhi, Shirdi Anand Jha Sir, to kindly welcome Honourable Chairman Sir with a plant. Thank you, sir. I request all the speakers to kindly come on to this side of the stage so that we may have a group photo. I request Chairman Sir as well as the member madams to join for the group photo. All the speakers from Category 1 and Category 2, please join us for a group photo. Thank you. Thank you, madams and sirs.

I request the speakers from Category 1 to kindly take their place on the stage, please. I request Abhishek Kumar sir, Ramesh Revuru sir, T. Srinivasan, Professor Mausam and Shri Martin Wilcox to take the seats, please. Now I request Honourable Chairman sir to kindly set the tone for this symposium with his opening remarks.

Shri Ravi Agrawal

Good evening, ladies and gentlemen. Well, I’m delighted to welcome you all to today’s symposium, which is under the aegis of the AI Impact Summit, Hitae Sarvajan Sukhai, Welfare for All, Happiness for All, which speaks to the theme. In fact, it’s a very powerful theme: how do you use AI for the welfare of all and the happiness of all? That’s the basic intent. And within it, the sessions today would be on AI-driven enforcement. And it is a privilege to join a conversation that brings together policymakers, technologists, enforcement agencies, and academia on a subject that will shape the future of governance. Income tax administration is at a critical inflection point, especially with the enactment of the new Income Tax Act 2025, along with the corresponding rules, forms and procedures, which would be effective from 1st of April ’26. It represents a paradigm shift in the philosophy, procedures and practices of the direct tax administration in India. And what makes it different is that, going forward, it is going to be a technology-driven ecosystem that would be put in place, and that is why the role of AI becomes so important and this gathering today becomes all the more relevant. Now, the new Income Tax Act, while simplifying the language and procedures, reduces interpretation ambiguity and brings tax certainty. And as I mentioned, from the beginning of the year, it is going to be a more rule-driven, technology-driven ecosystem.

The changes in the act, the language in the act, would also help in putting in place the algorithms which, through AI, going forward would reduce and minimize the scope for different interpretations. The positive environment created by the Income Tax Act 2025, which is reflected in the feedback that we have received from the stakeholders, and the prudent approach of tax administration that we have been providing over the last few years, provide a robust foundation for sustaining and advancing future reform measures to reduce litigation, enhance tax certainty and trust-based voluntary compliance. AI has the potential to transform every sector by amplifying human capability, by turning vast data into insights, automating mundane and routine work, and enabling faster, smarter decisions at scale.

For law enforcement, this means we can strengthen how we prevent, detect, and respond, but only if we build the right preparedness and capacity building through high-quality shareable data, secure systems, clear accountability, strong safeguards, and continuous training. And here, the basic theme anchored by the Honourable Prime Minister in the Manav vision becomes so important. Because ultimately, what does Manav reflect? Moral and ethical systems, accountable governance, national sovereignty and the right to justice, accessible and inclusive AI, and valid and legitimate systems. So what do these words reflect ultimately? These reflect that while we have in AI a very powerful tool, at the same time we need to be conscious about how we put in place and apply AI in our overall governance, overall welfare of people, happiness of people, while being ethical.

Also conscious of the fact that if not applied with responsibility, the results can be, you see, different. So we intend to adopt AI to support enforcement with clear accountability, build on secure and sovereign data foundations, ensure access to phased adoption and continuous training, and validate systems for fairness and lawful use. We have AI and it is, you see, developing fast. Within the income tax department, and even across, what we need to see is how we build our capacity, our resources. Because here is a solution: you have some AI tools, solutions. But for that, you need to drive. The human has to drive the AI rather than AI driving the human. And for that to happen, you have to build that capacity in your resources, in the human resources.

We need to be conscious of the fact of, okay, what are the pluses and the pitfalls when we are adopting AI. I would just like to share one experience that I had just yesterday. So I was told that through AI, you can actually develop some codes. I didn’t know about that. So yesterday I asked my son, well, how is it possible? And I was proposing to develop an app for our training purposes. So he told me that, OK, this is how we can go about it, this is the open source, and so on, so forth. And I put in place some sort of framework for the technology for this training module.

I spent about five, six hours in the night. And what was interesting was that within five, six hours, one actually was able to get a reasonably robust and matured code and a full application, which broadly takes care of the requirements of capturing training in the department. OK. Now, why did I mention this example? Well, but for this facility, development of this code would have taken months. But then, you see, with spending five to six hours, one was able to come up with some code. Even if I say it is elementary, it is basic, you have a platform on which you can build. So that is the power of AI. But can I blindly actually rely on it? The answer is no. I have to apply myself and see to it that, okay, you already have this platform, how do I build it up. And that is the potential of AI: it actually would help us to not do routine and mundane work; it would translate our effort from routine work to enhanced work, and that is where our capacity and our maturity would lie.

So this is an opportunity for us in the tax department, because we are all here in that context, but also as individuals, that we leverage the power of AI, but we leverage it being conscious of the fact that we have to drive it rather than AI driving us. So our approach needs to be practical, with use of proven applications for data integration, risk and priority scoring, anomaly detection, language support and workflow automation, with constant testing and learning, so we stay aligned with AI advancements and do not fall behind. This is also very important because things are developing, and when we talk about Developed India 2047, how do we actually keep pace with it? So you have to actually align yourself with the developments that are taking place.

And each of the organizations, be it in the government or outside, has to align with the others so that together as a nation we grow, put these opportunities into practice, and provide our taxpayers and stakeholders a best-in-class ecosystem and facilities. Over the past two financial years, we have applied AI in the department, though to a limited extent, and it has yielded results. As you would all be aware, targeted nudges have led to 1.11 crore taxpayers filing updated returns with a revenue impact of more than 8,800 crore. Foreign assets worth about 99,000 crore and foreign income of about 6,500 crore have also been declared by taxpayers on the basis of the prompts given by the tax department.

So we are moving from intent to action. We are scaling AI-based risk assessment, strengthening digital forensics and analytics, and building AI support for taxpayer services, to make compliance easier and enforcement more precise. The discussions at the summit will help us refine our approaches, set clear governance standards, and scale what works to improve speed, consistency, and fairness in enforcement. I wish you all the best, and I am sure that the deliberations here will be really useful and enriching. Thank you.

Amandeep Dhanoa

Thank you. Thank you, sir, for setting the tone and direction so clearly. As we begin with category one, we turn to industry and academia, the two ecosystems that are shaping the intellectual and technological foundations of artificial intelligence. While government defines purpose and safeguards, it is industry that builds scalable systems and academia that pushes the frontiers of responsible and explainable AI. This segment will help us understand not only what is technologically possible, but also what is practical, scalable and sustainable for public administration and law enforcement. Now we move to session one, Project Insight 2.0, where AI-enabled compliance and taxpayer services are being operationalized at scale. I call upon Shri Abhishek Kumar, Commissioner of Income Tax (Insight), who has been instrumental in shaping the income tax department's digital ecosystem through Project Insight and other initiatives. Joining him are Shri Ramesh Revuru, Global Head of Engineering at LTIMindtree, and Shri Srinivasan T, Technology Lead at LTIMindtree, bringing three decades of enterprise technology leadership. May I invite all three speakers to take us to the next phase of AI-enabled compliance. I request the speakers to be mindful of the time.

Abhishek Kumar

Now, coming to the last step: how does it help taxpayers in the end-to-end life cycle? The first key step is quick availability of accurate information to the taxpayers; as we already discussed, AI will enable this. Next, our NUDGE campaigns will become more effective through infusion of AI. For the very small fraction of cases that lead to litigation, we will be able to do litigation risk assessment through AI. With the advent of LLMs, it is possible to tag issues in assessment orders, appellate orders and judicial orders, so we will be able to tag issues and link judicial orders, and as a next step we will even be able to predict the vulnerability of a case, which will ultimately result in a reduction in litigation.

So all these business objectives we seek to achieve through Insight 2.0, especially through infusion of AI. These are the business objectives and how they will be achieved; what technology is proposed and how the technical implementation will take place will be explained by Mr. Ramesh from LTIMindtree. Thank you.

Ramesh Revuru

Ma'am, I got the message; I'll make Maggi and finish in two minutes. Thank you very much, sir. Thanks for the opportunity to be here in the august presence of all the income tax officials. I want to leave you with three key messages. The first and foremost is the launch of BharatVerse. Second, I'll talk about the importance of right action. And the last part is general intelligence in the context of CBDT. So, the first one: we at LTIMindtree now have a product offering called BlueVerse. BlueVerse is the agentic platform on which you can build your agents. Chairman sir spoke about how he was able to build an application without writing code.

Think of it as the platform on which you can build all five layers required for any multi-agent system: the foundation-model (LLM) layer, the data layer, the knowledge layer, the orchestration layer, and the consumption layer on top. All these layers are pre-built, and hence what we bring is the ability for CBDT to build their multi-agentic system faster; this is what we have implemented for our global customers. What we are launching is the Indianized version of BlueVerse, which we are calling BharatVerse, purpose-built for CBDT. Why is right action important? As you might know, generative AI is probabilistic in nature.

It is going to guess the next word, or generate the next word, or the next pixel, the next frame in the video. But in the context of CBDT, you cannot have something which is probabilistic; you need to move the needle to become more deterministic. And hence our ability to guarantee that right action in every condition, scenario and criterion is what right action is all about. This morning I was listening to Demis Hassabis, who heads Google DeepMind, and he said AGI is probably five years away. While AGI, which is general, human-like intelligence, is five years away, what you need is the general intelligence of the CBDT: right data and right context leading to that right action.

We have filed eight patents on creating this general intelligence, and it will be bundled with our BharatVerse implementation for CBDT. In the interest of time, I'll ask Srinivasan to take us through the technical architecture. A big thank you for the opportunity.

T. Srinivasan

Thank you, Abhishek sir, and thank you, Ramesh. I'll move quickly. The most important thing, while he spoke about right action and everything, is: how do I actually do it? Everybody talks about LLMs, but it is not about deploying a simple LLM. What we are building are SLMs, small language models, alongside the regular LLM used for the system. The purpose of this SLM is to be very much income-tax based: it is going to be for the ITD officials and for the CBDT, and we are going to ingest it with data that is closely related to this environment, your income tax laws and related information. That means there is data control, you have quality-vetted data, everything stays within the system, it is secure, and nothing goes outside at all. It is going to be what I call a sovereign model for this system. So how am I going to do that?

I cannot retrain the entire LLM fully; it is not cost effective. So we use a technique called LoRA, low-rank adaptation, where you can get there at roughly 1 to 2% of the overall training cost. What it does is freeze the base model's weights and train only small added matrices that adapt the model to this particular data. So you get the proper details. Now, I still need to clean it up.
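The arithmetic behind that 1 to 2% figure can be sketched in a few lines. This is an illustrative toy, not Insight 2.0's actual configuration; the layer size and rank below are invented:

```python
import numpy as np

# Hypothetical layer: a frozen d x d weight matrix from a pretrained model.
d = 1024          # hidden size (illustrative)
r = 8             # LoRA rank (illustrative)

rng = np.random.default_rng(0)
W_frozen = rng.standard_normal((d, d))   # stays untouched during fine-tuning

# LoRA adds a low-rank update W + B @ A; only A and B are trained.
A = rng.standard_normal((r, d)) * 0.01
B = np.zeros((d, r))                     # B starts at zero, so training starts from W

def forward(x):
    # Effective weight is W_frozen + B @ A, applied without materializing it.
    return x @ W_frozen.T + (x @ A.T) @ B.T

full_params = d * d
lora_params = A.size + B.size
print(f"trainable fraction: {lora_params / full_params:.2%}")  # → trainable fraction: 1.56%
```

With rank 8 on a 1024-wide layer, only about 1.6% of the layer's weights are trainable, which is the order of magnitude the speaker cites.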

So I use RAG plus a vector DB to make sure that every retrieval comes with source citation, which will be given to you. Then I am going to distill it: imagine a teacher and a student, where the student is a smaller, specialized version of the model, and for certain sets of tasks you will use that. Then quantization, because this is going to run at nation scale and I want to make sure we are very effective and efficient, so we will be using INT8. And the last and most important thing is your ontology.

We are going to look at your data structure, then at the sections, the precedents, the entities and the compliance rules; everything is going to go inside this model. So we are building it completely for you. Because of this you will be able to summarize, and you will have multi-language capability. It is not your typical generic LLM; it is focused, able to take on any task requiring legal-interpretation intelligence, and it will work on that.
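The four ingredients named above (sections, precedents, entities, compliance rules) can be pictured as a small ontology graph. All node names, rules and relations below are invented for illustration, not the real CBDT schema:

```python
# Illustrative ontology fragment mirroring the node types mentioned above.
ontology = {
    "nodes": {
        "s.139(1)":  {"type": "section",   "text": "Obligation to furnish return"},
        "s.234F":    {"type": "section",   "text": "Fee for late filing"},
        "CIT v. X":  {"type": "precedent", "text": "Late-filing fee upheld"},
        "Taxpayer":  {"type": "entity"},
        "R1":        {"type": "rule",      "text": "If return late, apply s.234F"},
    },
    "edges": [
        ("R1", "applies", "s.234F"),
        ("R1", "presumes", "s.139(1)"),
        ("CIT v. X", "interprets", "s.234F"),
        ("Taxpayer", "subject_to", "s.139(1)"),
    ],
}

def related(node):
    """All nodes one hop away, with the relation labels."""
    out = []
    for a, rel, b in ontology["edges"]:
        if a == node:
            out.append((rel, b))
        elif b == node:
            out.append((rel, a))
    return out

print(related("s.234F"))  # → [('applies', 'R1'), ('interprets', 'CIT v. X')]
```

Feeding structures like this into training or retrieval is one plausible reading of "everything is going to go inside this model": the model learns not just text, but which rules and precedents attach to which sections.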

So the advantages are scale, context and analytics, plus multi-language capability that can be used directly. Let me take the next two minutes on what we are going to do and how. I will take you through two journeys; we are going to build 25 or 30 of them, but I will show just two as samples. The first is about data quality: the most important thing is that we do not want to frustrate people because of the data coming from external sources.

We get data from external sources, and if there are issues in that data, it's a problem. So we validate it at the data-source level; we are going to have a proper agentic AI which does that. Then, grievances. Currently you would have FAQs and a bunch of what I would call deterministic chat. I am going to make it truly context-aware and intent-driven: chatbots, or conversational AI, which will take users through the journey and show them how they should do it.

Last but not least, pre-filled data is very powerful, but if that data is not proper, you get into trouble. So there too we are putting in the intelligence, and it is applied continuously. This is going to reduce the overall submission effort for the taxpayer. Next, we continue the intelligence with verification: we will be able to auto-detect and match discrepancies, and show where the data has gone wrong. Rather than just telling taxpayers what is right or wrong, we will help them fix it.

Once that is done, this will go through. And not everybody is doing it intentionally. I am sorry I am not using the mic; I think I am loud enough for everybody to hear me. So, in a nutshell, this agent primarily makes sure that unintentional non-compliance gets corrected, and the templates are very human-centred.

At the end, last but not least, there are people who will still do it, so we have problems. What we do there is have agents look at the cases and every other detail and make sure the vulnerability is predicted. For this entire flow, the SLM that is being created is the primary input. Let me go to the next one. This one is general, though I have put it under AI: the conversational assistant. It is for anybody and everybody. Most importantly, for anyone, looking at the portals, understanding where things are and then navigating them is a problem. So I am going to have a context-aware, domain-aware, intelligent NLP chatbot which understands and explains, what I call idiot-proof: a common man should not have to worry about legal jargon; rather, it tells him what he should do, step by step. That is one of my primary focuses. We are going to use a certain set of Llama models, the SLMs, and also the in-built SLM that is being built across the system.

So overall, if you really ask me, Insight 2.0 is moving towards enabling intelligence for both the officers and the citizens, and making them happier.

Amandeep Dhanoa

Thank you, sirs. We are already running behind by 20 minutes, so I request the further speakers to kindly speed up. Now Professor Mausam, founding head of the Yardi School of Artificial Intelligence at IIT Delhi and one of India's foremost AI thought leaders, will share perspectives on the possible usage of AI by law enforcement agencies and the road ahead. Sir, the floor is yours.

Professor Mausam

Thank you for the kind introduction. I was asked to speak on the usage of AI by law enforcement agencies in general and not just on income tax, so I will take a somewhat broader perspective. I should say that I have been fortunate to be involved with some of the earlier activities of the income tax department, and I personally feel that the kind of support the income tax department in India gives its users is much better than the support the US gives theirs. I have seen that because I have filed taxes in both countries, and there is no US equivalent of the 26AS form that the department gives us here.

There is just so much support that we have here; we can check how our filings are progressing. Also, I am not a law enforcement expert, so I got some feedback from Shankar Jaiswal, who is DGP Lakshadweep, and Sunny Manchanda, who is director of the DRDO Young Scientists Lab, so thank you to them. To me, this is the context: per 100,000 people, the number of police officers in India is much lower than in developed countries, and I will count China as a developed country now, like China, the US and Germany. We are at 155; they are at 200, 300 and so on. For judges per million people, it is recommended that we should have 50; we have about 15 to 22, depending upon which news article you read.

There are about 29% of police cases still pending investigation and about 4.85 crore court cases pending for over one year. With that kind of a situation, and I don't know how it is for income tax, we are always in need of high expertise, and hence the need to use AI in India: if we have to deal with the setup we are in, we need to somehow augment ourselves with technology. Now, of course, you can use AI in various ways, and there are aspects to think about. Are you using it in law enforcement before a crime is committed? Are you using it to predict crime? Are you using it to figure out what we should do to stop crime?

Or, when the crime has been committed, are we thinking about how to investigate it and how to make judgments on it? Depending upon where we are, AI can be used in all of these places. Similarly, AI can be used not only by income tax and GST, but also by the military, by maritime agencies, by traffic police, by other police, and so on. And then, what kind of data are we getting in? For most of this conversation we will be talking about financial data, which is structured data, but there is also visual data, language data and speech data, and bringing it together adds up to a lot of intelligence. You can take one item from the first column, one from the second and one from the third, and actually create new AI use cases. For example, we could do a much better job of traffic-police monitoring if we used the visual data, and so on; you can start thinking about really interesting possibilities here. In the next few slides I am just going to show you some very basic examples. For image and video: in 2014 we were very proud that Surat reported crime had been reduced by 27 percent just because there were CCTVs with face recognition, and if three or four people from a known database came together, police would go there, and it reduced crime. Somehow we haven't seen that replicated elsewhere in India, and I don't know why; this is really old technology now. We are really poised to do this; we should have CCTVs everywhere so that we can do a much better job of crime surveillance.

A DRDO lab is doing very interesting work on obfuscation: if you are wearing a mask, or your hairline has changed, can we figure out who you are? Visual intelligence for traffic should be very easy; we still see people driving on the opposite side of the road or not using helmets, and that can be absolutely automated with very simple imagery. Satellite imagery analysis is also very interesting. For example, the only way we know that China has a new port in Djibouti is because of the satellite images. It is not easy to analyze, but the data is there. When before has data about the whole world been so easily accessible to us?

Well, today it is. And for income tax, by the way, you can also start thinking about where a person is living, what kind of locality it is and how it lights up at night; that tells you the affluence level, and you can start using that kind of information. There was a very interesting case where a U.S. aircraft carrier was being chased by 20 Iranian vessels in the ocean, and we knew because the satellites can see. The same goes for maritime surveillance. We have so many use cases of AI today, such as DigiYatra. We can use face recognition for searching for missing persons. But we can also start thinking about anomalous behavior, like anomalous vehicular behavior.

In one of the infamous rape cases in Delhi, a car was just driving along the road, taking a U-turn, driving back, and taking a U-turn again. That kind of highly anomalous behavior should have been easily detected if we were doing this. Even today, in taxi safety, I know that women are still worried about taking an Uber late at night in Delhi; even though we have a panic button, we don't know whether the panic button really works. But if there is any anomalous behavior by the taxi driver, there should be a very clear mechanism to flag that the driver is not following what they are supposed to follow.
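The repeated-U-turn pattern is a textbook trajectory anomaly. One simple way to flag it is to count near-reversals of heading along a vehicle's track; the tracks and threshold below are synthetic, invented purely for illustration:

```python
import math

def heading(p, q):
    """Direction of travel from point p to point q, in radians."""
    return math.atan2(q[1] - p[1], q[0] - p[0])

def reversal_count(track, threshold=math.radians(150)):
    """Count near-180-degree changes of heading along a GPS track."""
    n = 0
    for i in range(len(track) - 2):
        h1 = heading(track[i], track[i + 1])
        h2 = heading(track[i + 1], track[i + 2])
        # Smallest angular difference between the two headings.
        diff = abs((h2 - h1 + math.pi) % (2 * math.pi) - math.pi)
        if diff > threshold:
            n += 1
    return n

# Synthetic tracks: one car driving straight, one looping up and down a road.
straight = [(x, 0) for x in range(10)]
u_turns  = [(0, 0), (5, 0), (0, 0.1), (5, 0.2), (0, 0.3), (5, 0.4)]

print(reversal_count(straight), reversal_count(u_turns))  # → 0 4
```

A live system would compute this over a sliding window of position fixes and raise an alert once the reversal count of a single vehicle crosses a tuned threshold.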

And it should be very easy to prevent. such crime. Same for taxi, bus, train safety monitoring. We go on to textual intelligence. A lot of people are interested in anti -terrorism and so on and so forth. Can we easily and quickly answer questions? For example, if I if we just kill Osama bin Laden and we gave you the laptop of Osama bin Laden, how much time would it take us to actually go through all his documents? I hope not much because AI should be there to help you. For example, I was working on this long time back and at the time we could figure out who are the entities active in Iraq. This is what my system gave 15 years ago that these are all the players who were active in Iraq at the time and if you want to know something about one particular player, you could just say what do we know about this person and it will give you a quick answer, a quick summary.

This kind of intelligence is now at our fingertips, and it should be used for figuring out who the bad actors are, whether in income tax or in other kinds of law enforcement. Moving on: speech-to-text for quick FIR filing; a chatbot to support … We just heard about Project Insight 2.0, where there will be a chatbot, but we also need a chatbot for our own income tax department people, because they have to work with the data, and if they had to write code every time, it would be very hard. As Mr. Agarwal said earlier, AI can write code now, so we should have text-to-code systems for our own IT department so that they can easily get to the data they are looking for and conveniently find the right information.

On the financial intelligence side, the input would be more structured data. There are so many interesting news stories coming out, which makes me very proud that we have a very active department. For example, just yesterday or the day before, 60 terabytes of billing data and 1.77 lakh restaurant IDs were uncovered, and a 70,000 crore tax evasion scam was exposed. This came just from some crunching of the data by AI: these people were deleting a lot of invoices, and once the system recognized that a pattern was going on and flagged it as anomalous, an income tax analyst could look at it and figure out what was happening.

Similar things have happened where large or frequent bank deposits flag mismatches in ITR filings, with AI tracking suspicious tax claims. So it seems very clear that our citizens know AI is looking at them, yet there are still scams going on, and if you can detect anomalous behavior, behavior that is not expected, we might be able to find even new scams that we couldn't guess ahead of time. And if we start to put this information together, it becomes even more interesting. For example, mule accounts. I have been told that college students have mule accounts. But if we know from their Facebook, sorry, I'm too old, their Insta pages or whatever the more recent social media is, that they are just college students, while their bank accounts are going through a lot of churn, it would be very easy for us to predict that it may be a mule account.
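The profile-versus-churn mismatch just described can be sketched as a simple peer-group outlier score. The account figures below are entirely invented; a real system would use far richer features than monthly turnover:

```python
from statistics import mean, pstdev

# Monthly account turnover (in rupees) for accounts whose holders are
# all declared students; figures are invented for illustration.
accounts = {
    "A": [8_000, 9_500, 7_200, 8_800],
    "B": [10_000, 9_000, 11_000, 9_500],
    "C": [8_500, 7_900, 9_100, 8_200],
    "D": [450_000, 520_000, 480_000, 610_000],  # churn wildly above the peer group
}

avg_turnover = {k: mean(v) for k, v in accounts.items()}
peer_mu = mean(avg_turnover.values())
peer_sd = pstdev(avg_turnover.values())

def z_score(acct):
    # How many standard deviations this account sits above its peer group.
    return (avg_turnover[acct] - peer_mu) / peer_sd

flagged = [a for a in accounts if z_score(a) > 1.5]
print(flagged)  # → ['D']
```

The flagged account is only a lead: as with the social-media cross-check in the talk, the point is to surface the mismatch for a human analyst, not to act on it automatically.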

Similarly, other cases of tax evasion and money laundering could be pieced together if multiple sources of data come together: the social media feed, the financial documents, the employment feed, investments, and the various kinds of purchases people make. Sometimes these will be invoices; if they are paper invoices, maybe OCR would be needed, so there will be a vision requirement. There are also interesting collusion rings. Generally we have found that people who are bad actors support each other in the bad acting, so if you create a graph around it and start looking for collusion rings, we might be able to find these people better. I think the sky is the limit. There is another very interesting phenomenon: where crime happens more is where we deploy more people.
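The collusion-ring idea a few sentences back, finding groups of filers who support each other, can be sketched as connected components over a link graph. The edges below are invented toy data:

```python
from collections import defaultdict

# Toy "supports each other" graph: an edge means two filers share a suspicious
# link (same fake invoice, same address, mutual bogus deductions). Invented data.
edges = [
    ("P1", "P2"), ("P2", "P3"), ("P3", "P1"),   # a tight three-way ring
    ("P4", "P5"),                                # an isolated pair
    ("P6", "P7"), ("P7", "P8"),                  # a loose chain
]

adj = defaultdict(set)
for a, b in edges:
    adj[a].add(b)
    adj[b].add(a)

def components():
    """Group filers into connected components of the link graph."""
    seen, comps = set(), []
    for start in adj:
        if start in seen:
            continue
        stack, comp = [start], set()
        while stack:                     # iterative DFS
            node = stack.pop()
            if node in comp:
                continue
            comp.add(node)
            stack.extend(adj[node] - comp)
        seen |= comp
        comps.append(comp)
    return comps

# Rank groups by internal link density, so tight rings surface before loose chains.
ranked = sorted(components(),
                key=lambda c: sum(len(adj[n]) for n in c) / len(c),
                reverse=True)
print(ranked[0])  # the tight three-way ring
```

At India scale this is exactly where the O(N²) concern raised later in the session bites: the toy DFS works on thousands of nodes, but national graphs need parallel, in-database graph engines.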

But once we do that, the people doing the crime figure out that we have more people there, and they go elsewhere. This is a game between attacker and defender, and if the attacker knows what the defender is doing, it is very easy for the attacker to change the place or the style of what they are doing and continue the game. So it is important not just to go where the crime is, but where the crime might be in the future, once the criminals learn that the defenders are going there. I am going to finish in two minutes. People have studied these security games, for example in elephant-poaching scenarios and coastal-patrol scenarios, and have found much better performance because of that. Let me take the last minute on the challenges in the use of AI in these scenarios, because there is a lot of opportunity and a lot of excitement, but we have to be careful. First of all, we cannot make this autonomous.

If AI starts to reach out to citizens directly and starts to make mistakes, it will create a lot of problems. People will be unhappy and worried, and we will lose trust in the system. It is important that our intelligence analysts and AI work together: AI brings up the issues, AI maybe generates a lead, but the lead is processed by a human to maintain trust in the system. That is where risk assessment becomes very important. Over-triggering is also a problem, because if you generate lots of alerts, people become immune to them, and we have to figure out how to do this in a trustworthy fashion. If you don't do this right, you get algorithmic bias; it has been shown that when AI was tried in judicial settings earlier, it made mistakes in favor of white people and against African-American people.

We are a very diverse society, with so many castes and social strata. If those biases got into our models, that would be very, very devastating, so we have to be very mindful that AI bias doesn't creep in and that we keep a human in the loop. Also, it is important that we gather data in a centralized repository. We are used to a system where one hand of the government doesn't talk to the other; Project Insight is trying to fix that, and I'm so happy about it. Other initiatives are also trying to fix that. We should make sure the data comes together so that intelligence comes out of it, and that inter-jurisdictional boundaries don't get in the way.

The other thing is that our defenders, our IT personnel, the analysts, the law enforcement agencies, need to be smarter than the attackers, because the attacker will always be creative in figuring out the next attack, and if we have not been thinking ahead of time, we will miss out. Finally, we have to make sure that with all this new data coming in, and the increased scrutiny and surveillance it obviously brings, we do not infringe on civil liberties and the privacy of persons. So these are the things we have to be mindful of, but I think the sky is the limit, and I'm really happy that we are doing this and that it is being used more.

Amandeep Dhanoa

Thank you, sir, for such an insightful perspective. Now, Mr. Martin Wilcox, Senior Vice President at Teradata and a global leader in AI-driven data analytics, will speak on AI-driven risk analytics of financial data for law enforcement agencies. Please.

Martin Wilcox

and understanding the networks of bad actors. But to build these sorts of graphs at India scale is incredibly complicated, because graph analytics is an O(N²) problem. So we need, again, scalable and performant systems, and to bring the complex graph algorithms to the data in the data warehouse instead of trying to copy samples of data out of it. If we have to cut the graph by taking small samples of data out of the warehouse, the risk is that we miss the bad actors we are trying to catch. I want to talk a little now about next-generation AI use cases. At Teradata, when we speak of next-generation AI use cases, we are typically looking for four characteristics.

We won't go through all four of those characteristics today in the interest of time, but as a couple of the previous speakers have mentioned, one defining characteristic of many next-generation AI use cases is this idea of multimodal data: the idea that images, audio and text can be leveraged in the kinds of ways that previously we have only been able to leverage structured transaction and event data. I'll come back and talk about that specifically in a moment or two. But here is another example that I thought might interest some of you, from Brazil's largest credit union, a company called Sicredi.

The challenge for this particular organization is that in Brazil there is a large unbanked population outside the formal economy, and obviously it is very difficult to make credit risk and lending decisions for a group of people who cannot prove their income. The solution for Sicredi is a sophisticated set of income estimation models. They use those models to predict an individual's likely income, and then make credit lending decisions on the basis of that predicted income. Now, this is a model that was trained outside of the database. We have a technology called Bring Your Own Model, which enables us to consume models regardless of where they have been trained: if you can train a model in PMML, in MOJO, or in ONNX, we can import that model and then use Teradata as a parallel harness to speed up the scoring of this model.

And I think this is incredibly important, because we are at a moment in the industry where everybody wants to talk about model training, because model training is exciting and model training is cool. But actually, we don't make any money when we train a model. We only make money when we can deploy that model to production and run inference, in this case inference at India scale, to actually change the way we do business. This Bring Your Own Model technology enables us to import models regardless of where they have been trained, so your data scientists can use the tools that make them most productive, while you still have a mission-critical platform that enables you to score models in production.

We get a very significant speed-up when we use this technology. From the numbers on this slide you'll see that in this particular case, for the income estimation models in Brazil, we were able to run inference 25 times faster on the parallel data warehouse by bringing the complex processing to the data instead of the other way around. And 25 times faster is the difference between running this model once per day and running it once per hour. If you run the model once per hour, you can change your entire business model; you can change the cost of credit during the working day. Now, this next example illustrates that multimodal phenomenon we were talking about.

This is another large Asian bank. This bank cares a lot about NPS, Net Promoter Score; they consider Net Promoter Score the single most important leading indicator of customer intent: whether the customer will leave, or will stay and consume more products. The problem this bank has is that it has very little understanding of the drivers of Net Promoter Score. But when we were working with them, we were able to establish that they were capturing 50,000 customer chats per week from the online banking application.
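Mining drivers of Net Promoter Score out of chat text can be sketched as comparing term frequencies between detractor and promoter conversations. The chat lines below are invented toy data, not the bank's, and a real pipeline would use proper text models rather than raw word counts:

```python
from collections import Counter

# Toy chat logs labelled by the NPS bucket of the customer who sent them.
chats = [
    ("detractor", "app crashed again during transfer money stuck"),
    ("detractor", "transfer failed and support never replied"),
    ("detractor", "login keeps failing app crashed twice"),
    ("promoter",  "transfer was instant great app"),
    ("promoter",  "support replied quickly issue solved"),
]

def term_counts(bucket):
    c = Counter()
    for label, text in chats:
        if label == bucket:
            c.update(text.split())
    return c

det, pro = term_counts("detractor"), term_counts("promoter")

# A term is a candidate "driver" of detraction if it is over-represented
# among detractors relative to promoters (add-one smoothing avoids /0).
drivers = sorted(det, key=lambda t: (det[t] + 1) / (pro[t] + 1), reverse=True)
print(drivers[:3])  # → ['crashed', 'again', 'during']
```

Even on this toy data, "crashed" surfaces as the top detraction driver; run over 50,000 chats a week, the same idea gives the bank a ranked, evolving list of what is actually pushing its score down.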

Amandeep Dhanoa

Thank you to all our speakers of the first category. As we move to category two, we now shift from perspective to practice. Across India, regulatory and enforcement agencies have increasingly embedded artificial intelligence into their core systems. This segment brings together agencies that are not just exploring AI but actively deploying it to strengthen compliance, improve oversight and enhance citizen-centric services. For this, we have among us Shri Suvendu Pati from RBI, Shri Harsh Poddar, an IPS officer, Shri Ram Ganesh from CyberEye, Shri Amnesh Pandey from SEBI, and Shri Shashi Bhushan Shukla. All the sessions have been so interesting that I see most of the audience sticking to their seats.

So for the first session in this category, I introduce Shri Suvendu Pati. Sir is the Chief General Manager and Head of FinTech at the Reserve Bank of India. Sir will present MuleHunter, an AI-driven initiative targeting mule accounts. Sir, please.

Suvendu Pati

So good evening, everyone, and thank you for having me. I will spend some time on what initiatives we have taken and then come to MuleHunter. First of all, recognizing the need for governance: the financial sector has been one of the early adopters of artificial intelligence, given that most of its decisions are based on data. RBI had constituted a committee, which submitted its report last year in August, and it has been placed on our website. It recommended seven sutras, or high-level design principles. And there are 26 recommendations: 13 on innovation enablement, and 13 on risk mitigation.

And together with these sutras, there are six pillars under which these recommendations are classified. I am happy to report that these seven sutras, which we initially started as recommendations or guiding principles for the financial sector, have now been adopted by the Government of India as India’s design principles, or sutras, for AI governance across all sectors. On the left side, you can see the recommendations of our RBI committee, and on the right side, the principles published by the Government of India on November 5th, outlining those very sutras.

And so, one of the foundational principles we are talking about is trust in the system. Any technology, no matter how powerful, will never be adopted unless it engenders trust; people should feel comfortable with the technology. Another principle which cuts across every application is putting people first: customers, people, citizens need to be protected at all times, and in high-risk areas and high-risk decisions one should talk about human-in-the-loop and things like that. Another thing we have recommended is innovation over restraint: unless we experiment with this new technology, we will never realize its potential.

So there is a lot of apprehension in people’s minds that it is a probabilistic, non-deterministic model and there may be mistakes. But unless we experiment, do sandbox testing and those kinds of experiments, we will never realize the true potential of this technology. So a little nudge was provided to the institutions: do experiment, do adopt. There are other principles which, in the interest of time, I will not talk about; these are some of the recommendations available in the report on our website, which you can go through at your leisure. One of those recommendations talks about setting up something called the AI sandbox. One of the critical reasons we need to do this in India is that entities face constraints on the availability of compute infrastructure, and there are also constraints with regard to the availability of data. Recognizing this, as a public good we would enable an AI sandbox by making cross-sectoral and cross-institutional data available in an anonymized way, which can be used by the entities and model developers.

Some of the other recommendations cover capacity building and an AI liability framework, another important element of how the customer needs to be protected. Moving on to the risk-mitigation principles: how board policy should be formed, product approval processes, cybersecurity measures, red-teaming exercises. So there are, again, a balancing 13 recommendations on risk mitigation. Now, let me turn to the application we are talking about today. Mule accounts in our banking system are a real challenge, and given the huge volume of data that we have, it is humanly not possible to tackle them without the use of technology or machines.

So we have developed the MuleHunter.ai application, which is now implemented across 26 banks; another three banks are in the process of implementing it. So far, 857 features have been identified, and the model is getting better and better as it is trained across institutions. Out of these 857 features, for a bank like State Bank of India only 50 features may be very critical, whereas for other banks, say RBL Bank or IndusInd, another set of 50 features would be important. So this itself is providing insights, based on our analysis and understanding. It is still at a relatively early stage of implementation and is being rolled out over a period of time; currently, it is deployed on-prem within each bank.

So the data really doesn’t go out of the banks themselves, but there is a central aggregation service that we are running which takes the intelligence from the features to a central aggregation model. And what we have identified from the insights it predicts: the rule-based engines that banks had been implementing so far gave a 20 to 30% level of accuracy, but with these MuleHunter AI-based models the accuracy level has gone up significantly, above 90% in some institutions, above 80% in others, and so on. As somebody said, rule-based systems are handicapped: a human element is required to analyze a large volume of data.

But here that number is getting reduced. For example, we found patterns like a lot of mule transactions taking place around midnight, when customer support is not there. This is a new feature which could be found. Similarly, there are accounts which remain dormant for a long time, suddenly get active, receive a barrage of payments, with receipts and debits happening, and then go dormant again. These kinds of pattern detections were not possible earlier. And, for example, for accounts detected to be salary accounts, the likelihood of being classified as a mule account is very low, so a kind of BRE engine is filtering out those kinds of accounts

and flagging only the remaining ones. Banks then need to do an enhanced due diligence after this flag is raised. We are working closely with I4C, and one limited study has revealed that for accounts predicted by MuleHunter, some banks initially classified them as not mule after doing the enhanced due diligence, but within a month or two we start seeing I4C complaints on those very accounts which were flagged, and such ratios range up to 60%.
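The midnight-burst and dormant-then-active patterns described above can be sketched in simplified form. The function names, thresholds and transaction format below are illustrative assumptions, not MuleHunter.ai’s actual feature definitions:

```python
# Hypothetical sketch of two mule-account pattern features mentioned above:
# (1) a burst of transactions around midnight, and (2) a long-dormant account
# that suddenly receives a barrage of credits and debits. All names and
# thresholds are illustrative assumptions only.
from datetime import datetime
from typing import List, Tuple

Txn = Tuple[datetime, float]  # (timestamp, signed amount: +credit / -debit)

def midnight_burst(txns: List[Txn], min_count: int = 5) -> bool:
    """Flag accounts with many transactions between 00:00 and 04:00."""
    late = [t for t, _ in txns if t.hour < 4]
    return len(late) >= min_count

def dormant_then_burst(txns: List[Txn], dormant_days: int = 180,
                       burst_count: int = 10, burst_days: int = 3) -> bool:
    """Flag accounts inactive for a long stretch that suddenly turn hyperactive."""
    if len(txns) < burst_count + 1:
        return False
    ts = sorted(t for t, _ in txns)
    for i in range(1, len(ts) - burst_count + 1):
        gap = (ts[i] - ts[i - 1]).days            # length of the quiet period
        burst = (ts[i + burst_count - 1] - ts[i]).days  # span of the burst
        if gap >= dormant_days and burst <= burst_days:
            return True
    return False
```

In a real deployment, features like these would be computed per account and fed to the model rather than used as hard rules, which is consistent with the 857-feature description above.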

That gives us the confidence that the model is identifying the mule accounts correctly, whereas the banks, constrained by their own branch banking and identification systems, are not classifying them correctly. Had we done this exercise earlier, in one bank that we took as a sample, around 75 to 100 crores of money could have been prevented from being lost if the bank had classified these as mule accounts on day zero and put debit freezes in place.

So these are some of the early insights that we are getting, and we are building on them as we progress. The future we talk about is a digital payments intelligence platform: we are aiming at a real-time transaction scoring mechanism, meaning at the time a transaction is going through, a score would be provided to the banks on whether to allow this transaction or not. Today, mule account detection happens once a crime has been committed; we are trying to move it to a preventive action.
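A minimal sketch of the real-time scoring idea, where each transaction gets a risk score and an allow/hold decision at payment time. The weighted-sum model and the 0.7 threshold are illustrative assumptions, not the platform’s actual design:

```python
# Illustrative real-time transaction scorer: feature values in [0, 1] are
# combined by weights into a risk score, and the score drives an allow/hold
# decision at payment time. Weights and threshold are assumptions only.
def score_transaction(features: dict, weights: dict, threshold: float = 0.7):
    """Return (score, decision) for one transaction.

    `features` maps feature name -> value in [0, 1]; `weights` sum to 1.
    """
    score = sum(weights.get(name, 0.0) * value for name, value in features.items())
    return score, ("hold" if score >= threshold else "allow")
```

For example, a transaction scoring high on both a midnight-activity feature and a dormant-then-burst feature would come back as "hold", prompting the bank to intervene before the money moves.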

So this is where, again, AI is going to help us a lot, working in partnership with telecom on suspect mobile numbers. That kind of filtering and a smart registry are being built, and I4C is also providing us insights. So this is ecosystem building as a public good: we are not only giving directions, we have dirtied our hands in building this tool, which is now getting implemented at scale. But yes, there are a lot of improvements that can be made in partnership across all the banks. If you are asking about the Supreme Court case, which dealt with the digital arrest cases and has formed an expert committee on this,

And the Reserve Bank is also a part of that committee. But much before the Supreme Court gave this direction, this initiative was already on; it is not something we started building post the Supreme Court direction. This work was undertaken almost one and a half years back. Over a period of time, 26 banks have implemented it and more are implementing; it is a work in progress and gets refined as we speak. There are newer initiatives in the pipeline as well, as I said. And we are also working alongside the banks on how to move from a manual due-diligence procedure to a hybrid of automated and human-intelligence-backed enhanced due diligence.

That would be the ultimate proof of preventing these frauds and protecting the hard-earned money of gullible citizens.

Amandeep Dhanoa

Thank you, sir. We will keep all the questions and answers for the end of the session. Now, quickly, I call upon Shri Harsh Poddar, Indian Police Service officer and an award-winning innovator in AI-driven policing, and Shri Ram Ganesh, cybersecurity expert and founder of CyberEye, to present cyber crime enforcement in action. Gentlemen, the floor is yours.

Ram Ganesh

crime and it was handed to an investigating officer, you had a series of supervisory meetings that would take place at the rank of the Deputy SP and the Additional SP in order to determine the path of the investigation. Today, what happens is that this co-pilot is able to ingest the FIR and all of the documentation of the investigation, and generate an investigative path that is compliant with the standard operating procedures laid out by that particular state government, in this case Maharashtra, as well as the High Court and Supreme Court judgments that outline the best practices for that kind of investigation. Broadly, the co-pilot performs four essential tasks.

After having generated this path, it also sends out a series of routine legal requests that we require for most investigations. These could be requests for telecom data or requests for forensic data. It also makes sense of digital forensics, by which I mean telecom data in organized crimes; as I’m sure those of you who have worked in tax investigation are aware, we garner vast volumes of telecom data, which we are able to analyze using the co-pilot. And then we also use open-source intelligence, which, again, we use a fair amount of in the police. From different open platforms, Facebook, PhonePe, Google Pay, etc., it is able to garner openly available data and make that a part of the investigation.

Essentially, this is what is happening: you have an adaptive investigation path that is unique to that particular case. Remember, it is not just an instance where it has spelt out or replicated the SOP for you; it has adapted the SOP and the judicial pronouncements on that particular head of cases to that case. That is what it actually does. In terms of case ingestion and how this exactly works: it ingests the FIR to start with. It also provides victim assistance, for example, unfreezing of accounts and volumes of money that have been frozen in cybercrime cases. It generates case diaries, which record the day-to-day progress of the investigation itself, and provides guided investigation paths which are compliant, as I said, with standard operating procedures.

And it also profiles people on the basis of open-source intelligence. Now, in my own district, as SP of Nagpur Rural, we have trained over 233 investigating officers on this, using which over 467 cases have been investigated in the six months since the launch by Mr. Satya Nadella and our Chief Minister. The co-pilot has actually enabled us to win a series of governance awards within the state of Maharashtra as well, but that is not so important. What is important here, and I also want to doff my hat a little to the training process: when we are onboarding systems such as this, it is important for the institution doing so to create space for training. I know, having been a beneficiary of it myself, that the Income Tax Department lays a lot of stress on training across all ranks, something that we can learn from within the police department.

But this is something that we stressed very substantially, and that has been useful and has also reduced resistance within organizations to onboarding and using it. I’ll end by covering four basic technologies from the artificial intelligence silo; at MARVEL, these are the kinds of technologies that we work on. First is large language models which, as I think was spelled out in the first session, are essentially artificial intelligence models that have been trained on large amounts of text and are able to interact in a manner very akin to a human being. The second is graph neural networks, which are artificial intelligence systems that make sense of siloed sets of data and the relational analysis between them.

In organized crime, that is very useful for being able to do a hub-and-spoke analysis of who is at the center of that crime. In Maharashtra, we have an act called MCOCA, as you might be aware, where, for organized crime, you need to be able to find out who the center of the gang is. Third is agentic artificial intelligence, which covers co-pilots such as this, triggering workflows and actually walking individuals through them. And the last is big data analytics, where there is structured analysis of large sets of data. That is the kind of work that we have been doing at MARVEL, and this is an instance of that. I’ll end with that. It has been a pleasure and a privilege.
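The hub-and-spoke idea can be illustrated with the simplest possible relational baseline: degree counting over who-contacts-whom edges, for instance from call-data records. A real graph-neural-network pipeline learns far richer relational features; this is only a toy stand-in:

```python
# Toy relational-analysis baseline for the "hub and spoke" idea described
# above: the node with the most distinct contacts is a candidate hub.
# This is a simplification for illustration, not a GNN.
from collections import Counter
from typing import Iterable, Tuple

def likely_hub(edges: Iterable[Tuple[str, str]]) -> str:
    """Return the node with the highest degree (most distinct contacts)."""
    contacts: Counter = Counter()
    seen = set()
    for a, b in edges:
        key = tuple(sorted((a, b)))
        if key in seen:          # count each contact pair only once
            continue
        seen.add(key)
        contacts[a] += 1
        contacts[b] += 1
    return contacts.most_common(1)[0][0]
```

A GNN goes further by propagating features along these edges, so it can surface a hub even when the hub deliberately keeps its direct contact count low.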

Thank you very much. Jai Hind.

Amandeep Dhanoa

Thank you, sir. Now I introduce Shri Avneesh Pandey, Executive Director at SEBI and a national voice on technology strategy and cybersecurity governance. Sir, please.

Avneesh Pandey

Thank you. First of all, a very good evening to all of you; it is indeed a great privilege to be here, and I thank CBDT for giving me this opportunity. For the past two days, we have been listening to a lot of AI-based initiatives all over the place, but something that has really stuck with us at SEBI for some time is that the most important thing is to build capacity for undertaking these AI initiatives. To that effect, we have truly democratized AI development within the organization. And I take quite some pride in introducing some of the names I have here in the crowd: Mr. Sandeep Kriplani, Mr. Rohit Saraf, Vikas Komera, Rajuddin Khan, and Pramit.

Pramit is the youngest of them. I’ll tell you why this is important: some of the initiatives I am going to present to you today have been handcrafted by these intellectual minds, and they are not from the IT department of SEBI. So that is very important; it is truly democratized to that extent. From SEBI’s perspective, we have quite a broad mandate: to protect the interests of investors, to promote the development of, and to regulate, the securities market. It is a fairly large mandate. To that effect, we craft regulations and seek compliances; compliances are a major part of our regulatory processes. We also conduct investigations and initiate enforcement proceedings from the data that we collect from various sources.

Going further, we also adjudicate, issue directions and levy penalties. Why am I saying this? It is to say that we have varied use cases within the organization where we have started to use the power of AI. There are four use cases I would like to mention here that have been doing well in terms of generating valuable output for us. First is RIDAR, a tool which ensures very proactive compliance for the advertisements being issued by regulated entities, specifically mutual funds. Second is Sudarshan, a very important tool which is able to track the misleading content that unregistered fin-influencers are putting onto social media. Third, InfoMerge is the workflow intelligence we have built to make our investigation processes more efficient and let us undertake that activity faster. Fourth, cybersecurity compliance and audit covers tools we have built to ensure that the cybersecurity compliances being sent to SEBI are well read and that we make good meaning out of them.

So I’ll take them one by one, though I am slightly cognizant of the time I have in hand. First is RIDAR, which, as I said, takes care of all the advertisements that the mutual fund industry puts out. The tool basically looks into whether an advertisement is compliant with the regulatory requirements mandated by the code of conduct. Some of the non-compliances this tool is able to capture are illustrated here; most of the non-compliance we have caught is in terms of non-disclosures and disclaimers not being adequately put in. Moving on, Sudarshan is trying to combat a lot of financial frauds, which also include investment frauds.

So this is a tool which is able to capture the non-compliances and frauds that are part of the securities market domain we are involved with. Our media monitoring cell in SEBI has flagged nearly one lakh instances of misleading content on these platforms. To strengthen our approach, we built this product called Sudarshan, which does continuous monitoring. It is a multimodal tool and works in multiple languages as well, knowing that some of these unscrupulous actors are using the capability of languages to defraud people. It has enhanced detection capabilities, which we validate against the data present within SEBI. By this we are able to figure out financial misinformation and ensure financial information

integrity. InfoMerge, as we call it, is a tool for our investigation process. As you all know, the investigation process runs from case initiation to data collation, data analysis and report generation. Using this tool, we are able to systematize all the data collected from various sources into one format. We are able to look into a particular company’s profile, designations and financials, and also figure out what corporate announcements were made during the investigation period. Visualization is how people are able to see the patterns, and the tool has some very innovative features for that. Finally, report writing.

From one investigating officer to another, there has always been variance. To ensure that we have a standardized mechanism to get a report, and get it in an orderly manner, this particular tool does the last part of writing the report. Of course, those reports again go for a human-in-the-loop review. Coming to the last system, which we launched very recently: SEBI initiated a cyber resilience and cybersecurity framework, based on which we have started to receive a lot of compliances, meaning controls, levels and artifacts that are submitted for those compliances. This particular tool autonomously reads those compliances and flags where something is missing from the audit report.

We have a very novel three-model architecture, so that if one particular model is hallucinating, the other models take care of it and give a reasonable, meaningful analysis. So, apart from giving dashboards and real-time visibility, it ensures that at SEBI we are able, at any point in time, to do a relative analysis of all our intermediaries and know where they stand on cybersecurity measures. Yeah, so that was very quick; sorry if I have been too fast, I was trying to keep pace with the seconds that were ticking. Thank you.
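The three-model cross-check can be sketched as a simple majority vote, assuming each model independently answers the same compliance question so that a single hallucinating model cannot dominate. The stand-in model functions below are hypothetical, not SEBI’s actual architecture:

```python
# Hedged sketch of a multi-model cross-check: ask every model the same
# question and return the majority answer, so one hallucinating model
# is outvoted. Models here are stand-in callables for illustration.
from collections import Counter
from typing import Callable, List

def cross_checked_answer(models: List[Callable[[str], str]], question: str) -> str:
    """Ask every model and return the most common answer."""
    answers = [m(question) for m in models]
    return Counter(answers).most_common(1)[0][0]
```

With three models, any single hallucination is outvoted two to one; production systems would also log the disagreement itself as a signal for human review.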

Amandeep Dhanoa

Thank you, sir; it ended in very clockwork fashion. Now we are coming to the final technical session. Shri Shashi Bhushan Shukla, Principal Commissioner at CBDT and a key architect behind the Data Analytics Cell and the Saksham Nudge Initiative, will speak on the use of AI for ease of compliance in tax administration. We look forward to your insights.

Shashi Bhushan Shukla

Thank you, Aman. Good evening, everyone. I think we are almost at the close of this session and are left with maybe five to seven minutes, but since this is the last session, we can take a few more minutes. So, this is the journey of the Income Tax Department. The department has been a pioneer in the adoption of state-of-the-art technology, and we started using technology quite early; I have given a few examples. Let us take a look at the graph of the last 25 years. Filing of TDS returns started in 2004, followed by e-filing of returns, and then Taxnet was launched. CPC started in 2009, and then CPC TDS. As Professor Mausham mentioned, the Income Tax Department is highly taxpayer-service oriented: we show taxpayers the financial information available with the department, and we started showing Form 26AS from 2017 onwards. We have automated several processes of the department, including online issuance of Form 16, faceless assessment, the e-filing portal in 2021, and the national cyber forensic policy launched in 2024. The year before last, we started an initiative called Nudge, which is the non-intrusive use of data to guide and enable taxpayers. In the first session, my colleague talked about Insight 2.0, and there are several projects now being updated with state-of-the-art technology, including the use of artificial intelligence, to enhance the taxpayer experience and the ease of tax compliance. So the department is using technology, including artificial intelligence, keeping the taxpayer at the heart of it.

If we talk about the data the Income Tax Department has: there is vast data from several sources. For example, PAN: more than 80 crore people have already been issued PAN, and it might have reached a little more by now. More than 9 crore ITRs are filed, and 12 crore people are paying taxes. Then, under SFT, we get more than 650 crore data fields for specified financial transactions, which are populated in the AIS; that is a huge amount of data available with the department. We also collect data under Rule 114B, Form 60 is submitted, and then Form 61A for specified financial transactions is submitted.

We also receive information from foreign jurisdictions. More than 100 countries share foreign asset and foreign income information with India. We receive around 50 lakh pieces of information every year under the CRS and FATCA frameworks, which is automatic exchange of information, and we also share information in respect of non-residents having assets in India with the respective foreign jurisdictions; around 1 crore pieces of information are transacted. So this is a lot of information, a lot of data we have, including assessment orders and appeal orders. This data can be utilized within our projects, Insight 2.0, ITBA and CPC, for generation of intelligence, for better compliance, for taxpayer awareness, and for informing taxpayers about payment of correct taxes.

So this Nudge initiative was started two years back. We are using the data coming from various sources, including foreign jurisdictions, for educating and guiding the taxpayer to comply with the tax laws, to correct their filing, and to declare their correct assets and income. This Nudge has a seven-step strategy embodied in the word SAKSHAM, which in Hindi means empowered. This strategy is basically empowering the department as well as the taxpayers to file correct taxes. The seven-step strategy covers how we use the data: Sankalan, basically the compilation and collection of data which, as we have discussed, comes from diverse sources.

Then Anusandhan: how we analyze and do research over the data to generate insight and intelligence for risk identification, and how we act on the data with actionable interventions for targeted outcomes. Then we do the communication: basically, informing taxpayers that they may need to review their filing and may have to change their income or computation. This is where we are using behavioral insight and guiding taxpayers to pay the correct taxes. And at the same time, we are also hand-holding and facilitating them through the fifth step, which is called ASTAK, and then ADHIKAR, the enablement of the taxpayer for the payment of taxes.

Legal changes have been brought into the Income Tax Act, whereby taxpayers are now allowed to update their ITR by payment of additional taxes and to correct their income. So now we use this for asking the taxpayers; it can be done for up to four years, and you can come out with the right taxes in the end. Basically, it is a preemptive exercise where no punitive action is taken against the taxpayers and there are no penal consequences; the taxpayers are allowed to change the ITR which was filed originally. Then this whole cycle is completed through evaluation, where we take the feedback of taxpayers: the responses we receive are analyzed, and all these steps can be further improved so that the next nudge can communicate with the taxpayers with better information. And this strategy has actually yielded very good results; the taxpayers have responded well, it has been received well, and the trust earned by the department has given a very good result. I have given a few case studies here, some outcomes, which were also discussed by the Chairman in his opening remarks. If we look at the recent foreign asset nudge, which was carried out in the month of December: we sent messages to taxpayers stating that you may have some foreign asset which has not been reported in your ITR, and the taxpayers then revised their ITRs. 1.57 lakh taxpayers have disclosed foreign assets worth 99,000 crore.

This exercise shows that once taxpayers are informed that this is what the department knows and that they missed it while filing their ITR, they may come forward and declare. This has resulted in 6,540 crore of additional income and 99,000 crore of assets. Similarly, we have also taken up a few more exercises. The other one I have mentioned concerns bogus donations: bogus deductions claimed by taxpayers by taking fake receipts from unrecognized political parties, and from entities, NGOs, which are not eligible for donation. Here also the result has been quite encouraging: 6.96 lakh taxpayers have revised their ITRs and withdrawn claims worth 9,879 crore, which has given the department additional taxes of 1,758 crore.

These two graphs explain how these campaigns have actually resulted in behavioral change in taxpayers. If you see the foreign asset pattern, reporting of foreign assets has increased from 1.59 lakh filers before the Nudge campaign started to 4.7 lakh now, almost a three-times increase in a span of two years. Similarly, the claim of deductions has gone down over the last two years, from almost 7,400 crore to almost 4,000 crore, nearly half. So this is the power of data and the data analytics which we do using technology.

And as discussed under the Insight 2.0 project, with the use of artificial intelligence and better technology we will be able to identify anomalies much faster, and we can nudge taxpayers at the time of filing of the return, or even before the return is processed. Since there are many representatives from law enforcement agencies here, I also wanted to discuss this particular topic. India is leading a project on the misuse and threats of AI in tax crime and financial crimes. It is a 17-country group being led by India, and we request all the LEAs, if they have come across any misuse, challenge or risk of AI in their regular working and the administration of their institutions, to communicate with us, so that at the international level we can take it forward in a collaborative manner and try to find solutions to various problems.

The misuses reported so far are basically the use of AI-generated synthetic identities, deepfake documents and sometimes the fabrication of court orders. These are the AI-assisted misuses which are happening and which are a challenge for all law enforcement agencies. The RBI has come out with the MuleHunter software; maybe for synthetic identity identification also we can use AI, where it can further support law enforcement agencies in identifying such misuses so that we can take some preemptive measures before the attack takes place. So we will also send a communication, but please keep in mind that this project is going on. And if we talk about the future use of AI in the department: basically, how we will be able to enable our taxpayers to pay correct taxes at the right time, without any penalty or additional tax.

So this is what we are trying in the department: we should use AI in an informative manner. It should also be able to cross-validate various data sources, so that if there is any anomaly, it can be predicted on a real-time basis and in a proactive manner. When I say real-time basis, I mean that at the time when the taxpayer is preparing to file the return, we can show them the financial data which we have received from third parties. At the time of filing of the return, we can use prompts to inform taxpayers if they are making any wrongful claims, or if they are not reporting assets which may be in the knowledge of the department.

And then, once the return is filed, before verification we can further analyze the returns and prompt the taxpayers to correct them before processing, or after processing we can carry out the further nudge exercise. So all of this will be a complete 360-degree program, where we enable taxpayers right from the beginning to pay the correct taxes. This is how we plan to use AI where taxpayer services are concerned. For administration, obviously, we are making ourselves capable, training our manpower and adopting the technology, to serve the country better and also to collect revenue correctly and on time. With this I will end. And this is the closing thought; it is for everyone to read. Thank you so much.
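The filing-time cross-validation described above can be sketched as a simple rule: compare the draft return against third-party data and emit prompts for shortfalls or undeclared assets. All field names and thresholds here are hypothetical, for illustration only, not the department's actual system:

```python
# Illustrative sketch (hypothetical fields/thresholds): cross-validate a draft
# return against third-party financial data and emit prompts at filing time.

def compliance_prompts(draft_return: dict, third_party: dict, tolerance: float = 0.10) -> list:
    """Compare self-reported figures with third-party data and flag anomalies."""
    prompts = []

    # Flag income fields that fall short of third-party reports beyond the tolerance.
    for field, reported in third_party.get("income", {}).items():
        declared = draft_return.get("income", {}).get(field, 0)
        if declared < reported * (1 - tolerance):
            prompts.append(f"Income under '{field}' ({declared}) is below third-party data ({reported}).")

    # Flag assets known to the department but missing from the return.
    declared_assets = set(draft_return.get("foreign_assets", []))
    for asset in third_party.get("foreign_assets", []):
        if asset not in declared_assets:
            prompts.append(f"Foreign asset '{asset}' reported by a third party is not declared.")

    return prompts

prompts = compliance_prompts(
    {"income": {"interest": 5000}, "foreign_assets": []},
    {"income": {"interest": 20000}, "foreign_assets": ["overseas account"]},
)
for p in prompts:
    print(p)
```

A real pipeline would of course work on reconciled statements rather than raw pairs, but the shape of the check (threshold on discrepancies, set difference on known assets) is the same.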

Amandeep Dhanoa

Thank you, sir. Thank you, sir. And now I invite Shri Mahadevan K, Joint Commissioner of Income Tax for the vote of thanks.

Justice R. Mahadevan

Respected Honorable Chairman, distinguished speakers, eminent guests, colleagues and participants. It is my privilege to propose the vote of thanks at the conclusion of this highly interesting session. Today's deliberations have clearly demonstrated that artificial intelligence is no longer aspirational; it is operational. I begin by expressing sincere gratitude to the Honorable Chairman, CBDT, Shri Ravi Agrawal, for his visionary opening remarks. In particular, sir highlighted how the new Income Tax Act would be tech-driven to reduce litigation over interpretations, and emphasized the use of AI in an ethical manner while ensuring accountability and transparency. Sir set the strategic direction for AI-enabled, trust-based governance. The session on Project Insight 2.0 by Mr.

Srinivasan T, Sri Abhishek Kumar and Sri Ramesh Revuru demonstrated how Insight 2.0 is reshaping the taxpayer's life cycle through AI-enabled prefiling, conversational chatbots, behavioral nudges, AI-based litigation risk assessment and vulnerability prediction. The vision of a sovereign SLM for the tax domain stands out as a transformative initiative. The session on a roadmap on AI for law enforcement by Professor Mausam outlined AI applications across preventive, predictive and investigative domains using visual, textual, financial and multimodal analytics. He highlighted use cases such as facial recognition, anomaly detection and crime forecasting to enable intelligence-led enforcement. He emphasized human-AI teams, with explainability, bias mitigation and civil liberties as essential safeguards. These aspects brought conceptual clarity and policy depth to the discussion.

The session on AI-driven risk analytics by Mr. Martin Wilcox highlighted how data analytics enhances enforcement through graph analytics, in-database model deployment, and leveraging vector stores and multimodal AI for intelligent querying. The transition from a system of record to a system of intelligence was particularly compelling. The session on Maha Crime OS AI by Sri Harshiya Podasa, Sri Ram Ganesh and Sri Vikram Kale powerfully addressed the investigation crisis and showed how AI enables automated crime-handle extraction and guided investigation workflows, combined with 360-degree profiling integrated with CDR and other tools. In particular, the emphasis on human-in-the-loop architecture ensures accountability alongside efficiency. The session on AI for ease of compliance by Sri Shashi Bhushan Shukla illustrated the difference between AI and AI-driven risk analytics.

The income tax department's evolution towards AI-driven platforms such as Insight 2.0, ITBA 2.0 and Saksham Nudge was traced. Sir explained how large-scale data integration and cross-validation enable risk-based, proactive and real-time compliance support. The focus was on shifting from enforcement-led systems to AI-enabled, trust-based voluntary compliance and taxpayer-centric services. The session on Mule Hunter by Sri Suvendu Pati highlighted the FREE-AI framework with its seven sutras, six pillars and structured recommendations balancing innovation and risk mitigation. The presentation on Mule Hunter demonstrated how advanced ML models, graph analytics and real-time risk scoring are strengthening mule-account detection. The proposed DPIP collaborative platform further reflects a forward-looking, ecosystem-wide approach to AI-enabled financial integrity and supervisory resilience.
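The graph-analytics approach behind mule-account detection can be illustrated with a minimal fan-in heuristic: many distinct senders funnelling into one account that rapidly forwards funds onward. This is a toy sketch, not the Mule Hunter implementation:

```python
# Toy sketch (not RBI's Mule Hunter): flag candidate mule accounts by a simple
# graph heuristic -- several distinct senders pay into one account, which then
# forwards funds onward ("fan-in/fan-out" pattern).
from collections import defaultdict

def flag_mule_candidates(transfers, min_senders=3):
    """transfers: list of (sender, receiver) pairs; returns suspicious accounts."""
    senders_into = defaultdict(set)   # account -> distinct senders into it
    forwards_out = defaultdict(set)   # account -> distinct receivers it pays
    for src, dst in transfers:
        senders_into[dst].add(src)
        forwards_out[src].add(dst)
    return sorted(
        acct for acct in senders_into
        if len(senders_into[acct]) >= min_senders and forwards_out[acct]
    )

transfers = [("A", "M"), ("B", "M"), ("C", "M"), ("M", "X"), ("A", "B")]
print(flag_mule_candidates(transfers))  # ['M']
```

Production systems layer ML risk scores, timing, and amount features on top of such structural signals, but the underlying object is the same transaction graph.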

The session on AI-driven regulatory enforcement by Sri Avneesh Pandey highlighted how SEBI is operationalizing AI across enforcement, including proactive compliance review, real-time detection of misleading financial content and its influencers, and AI-driven cybersecurity audit compliance. These initiatives reflect how AI can strengthen investor protection while ensuring regulatory prudence. A special word of appreciation to Srimati Amandeep Dhanoa for her engaging and energizing moderation. Today's session reaffirmed that AI enhances risk intelligence, improves service delivery, strengthens regulatory oversight and enables data-driven governance. On behalf of CBDT, I extend heartfelt gratitude to all speakers, institutions, organizations and participants. A special word of appreciation also to the Principal CCIT Delhi headquarters team and the DGIT Investigation team, Delhi, for their dedicated support and meticulous coordination in organizing this event.

Thank you all for making this session impactful and forward -looking. With this, I formally conclude the session. Thank you all. Thank you.

Related Resources: Knowledge base sources related to the discussion topics (37)
Factual Notes: Claims verified against the Diplo knowledge base (6)
Confirmed (high)

“Amandeep Dhanoa is an Indian Revenue Service officer of the 2018 batch and served as moderator of the symposium.”

The knowledge base lists Amandeep Dhanoa as an Indian Revenue Service Officer of the 2018 batch and as the moderator of the symposium [S3].

Confirmed (high)

“Shri Ravi Agrawal is the Chairman of the Central Board of Direct Taxes and an Indian Revenue Service officer of the 1988 batch with over three decades of experience.”

The knowledge base confirms Ravi Agrawal’s role as Chairman of the Central Board of Direct Taxes and his IRS 1988 batch background with more than thirty years of service [S3].

Additional Context (medium)

“Responsible AI deployment requires high‑quality shareable data, secure systems, clear accountability, strong safeguards and continuous training.”

S109 outlines key AI‑readiness requirements such as cataloguing data in machine‑readable form, security, governance and continuous skill development, which expand on the prerequisites mentioned in the report.

Additional Context (medium)

“AI can dramatically accelerate code development, exemplified by the Chairman generating a functional training‑module code in five to six hours.”

S111 describes a senior engineer building a complex service in 14 days using generative AI tools, illustrating comparable speed gains from AI‑assisted coding.

Additional Context (low)

“AI enables rapid large‑scale data processing, such as deduplicating billions of images in a short time.”

S25 reports that AI deduplicated 90 crore photographs in 51 hours, providing concrete evidence of the high‑speed processing capability referenced in the report.

Additional Context (low)

“Responsible AI deployment should consider agentic behavior, safeguards and human‑in‑the‑loop control.”

S108 discusses responsible deployment of AI agents, emphasizing autonomy, reasoning, and safety measures, which adds nuance to the report’s discussion of AI safeguards.

External Sources (112)
S1
https://dig.watch/event/india-ai-impact-summit-2026/ai-driven-enforcement_-better-governance-through-effective-compliance-services — Thank you. Thank you, sir, for setting the tone and direction so clearly. Now, as we begin with category one, we turn to…
S2
Announcement of New Delhi Frontier AI Commitments — -Abhishek: Role/Title: Not specified (invited as distinguished leader of organization), Area of expertise: Not specified
S4
AI-Driven Enforcement_ Better Governance through Effective Compliance & Services — -Suvendu Pati- Chief General Manager and Head of FinTech at the Reserve Bank of India
S5
https://dig.watch/event/india-ai-impact-summit-2026/ai-driven-enforcement_-better-governance-through-effective-compliance-services — All the sessions being so interesting that I see most of the audience sticking to their seats. So for the first session …
S6
AI-Driven Enforcement_ Better Governance through Effective Compliance & Services — -Ram Ganesh- Cyber security expert and founder of CyberEye
S7
Announcement of New Delhi Frontier AI Commitments — -Ganesh: Role/Title: Not specified (invited as distinguished leader of organization), Area of expertise: Not specified
S8
AI-Driven Enforcement_ Better Governance through Effective Compliance & Services — -Amandeep Dhanoa- Indian Revenue Service Officer of 2018 batch, Moderator of the symposium -Justice R. Mahadevan- Joint…
S9
Keynote-Demis Hassabis — -Demis Hassabis: Role – Co-founder and CEO of Google DeepMind; Titles – Sir, Nobel laureate; Areas of expertise – Artifi…
S10
AI-Powered Chips and Skills Shaping Indias Next-Gen Workforce — -Ashwini Vaishnaw- Role/Title: Honorable Minister (appears to be instrumental in India’s semiconductor industry developm…
S11
AI-Driven Enforcement_ Better Governance through Effective Compliance & Services — -Ramesh Revuru- Global Head of Engineering at LTI Mindtree
S12
https://dig.watch/event/india-ai-impact-summit-2026/ai-driven-enforcement_-better-governance-through-effective-compliance-services — insights who has been instrumental in shaping the income tax department’s digital ecosystem through project insight and …
S13
AI-Driven Enforcement_ Better Governance through Effective Compliance & Services — -T. Srinivasan- Technology Lead at LTI Mindtree; brings three decades of enterprise technology leadership
S14
Journal of International Commerce and Economics — – – Online Casino City. 2008. Costa Rica, Antigua file for WTO arbitration. Press Release, February 1. http://online….
S15
AI-Driven Enforcement_ Better Governance through Effective Compliance & Services — -Amandeep Dhanoa- Indian Revenue Service Officer of 2018 batch, Moderator of the symposium
S16
https://dig.watch/event/india-ai-impact-summit-2026/ai-driven-enforcement_-better-governance-through-effective-compliance-services — All the sessions being so interesting that I see most of the audience sticking to their seats. So for the first session …
S17
DC-Sustainability Data, Access & Transparency: A Trifecta for Sustainable News | IGF 2023 — Gabriela Ramos:Thank you very much, Benga. Words to live by there, and I’m sure we’ll go back to each of those three poi…
S18
AI-Driven Enforcement_ Better Governance through Effective Compliance & Services — -Shri Ravi Agrawal- Chairman, Central Board of Direct Taxes; Indian Revenue Service Officer of 1988 batch with over thre…
S19
Defending the Cyber Frontlines / Davos 2025 — – Ravi Agrawal: Editor-in-Chief of Foreign Policy Magazine, host of FP Live Ravi Agrawal: Hi, everyone. My name is Ra…
S21
AI-Driven Enforcement_ Better Governance through Effective Compliance & Services — -Avneesh Pandey- Executive Director at SEBI; national voice on technology strategy and cybersecurity governance
S23
AI-Driven Enforcement_ Better Governance through Effective Compliance & Services — -Shashi Bhushan Shukla- Principal Commissioner at CBDT; key architect behind data Analytics Cell and Saksham Nudge Initi…
S25
Driving Indias AI Future Growth Innovation and Impact — How do you? Build the trust like we just discussed to ensure that there is that. the ecosystem knows that this entire pr…
S26
Scaling Enterprise-Grade Responsible AI Across the Global South — “And for that, the way we think about this is we have an agentic orchestration framework.”[11]. “you have to train the w…
S27
Secure Finance Risk-Based AI Policy for the Banking Sector — Compliance functions increasingly rely on automated pattern recognition, while adaptive cybersecurity models respond to …
S28
Europol report highlights growing threat of financial and economic crime in the EU — Europol has released its first-everthreat assessment on financial and economic crime,shedding light on a clandestine sys…
S29
How the Global South Is Accelerating AI Adoption_ Finance Sector Insights — -Regulatory Approach and Framework: India’s Reserve Bank of India (RBI) has adopted a progressive, principles-based appr…
S30
AI sandboxes pave path for responsible innovation in developing countries — At theInternet Governance Forum 2025in Lillestrøm, Norway, experts from around the worldgatheredto examine how AI sandbo…
S31
HIGH LEVEL LEADERS SESSION IV — In conclusion, while AI and new technologies offer immense potential, it is crucial to address concerns such as inequali…
S32
Trusted Connections_ Ethical AI in Telecom & 6G Networks — “It’s a foundational capability shaping how networks are designed, operated and experienced by users.”[16]. “But it is t…
S33
How to make AI governance fit for purpose? — This insight fundamentally reframes the governance challenge from a regulatory compliance issue to a trust-building exer…
S34
Taxing Tech Titans: Policy Options for the Global South | IGF 2023 WS #443 — The analysis explores various perspectives on international taxation and global tax rules. One significant aspect is the…
S35
THE DRAFT MINISTERIAL REPORT PREFACE — | E EL LE EC CT TR RO ON NI IC C C CO OM MM ME ER RC CE E: : T TA AX XA AT TI IO ON N F FR RA AM ME EW WO OR RK K C CO O…
S36
Artificial intelligence (AI) – UN Security Council — On the other hand, the integration of AI in international law also presents significant opportunities. It offers the pot…
S37
AI for Humanity: AI based on Human Rights (WorldBank) — In addition to surveillance risks, AI-powered tools used to understand political tension can disrupt democratic processe…
S38
WS #205 Contextualising Fairness: AI Governance in Asia — – Tejaswita Kharel: Project Officer at the Center for Communication Governance at the National University Delhi. Works o…
S39
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Lt Gen Vipul Shinghal — “Firstly, what decisions that AI must not be delegated to must always remain human.”[17]. “Human control has to be insti…
S40
European Central Bank advocates monitoring and regulation of AI in finance — The European Central Bank(ECB) has issued a call for increased vigilance and potential regulation regarding the use of A…
S41
Overview of AI policy in 15 jurisdictions — Summary China remains a global leader in AI, driven by significant state investment, a vast tech ecosystem and abundant …
S42
From principles to practice: Governing advanced AI in action — Juha Heikkila: Thank you. Thank you very much. It’s indeed a great pleasure to be here and to be a member of this panel….
S43
State of play of major global AI Governance processes — Juha Heikkila:Thank you very much, and thank you very much indeed for the invitation to be on this panel. So indeed the …
S44
AI-Driven Enforcement_ Better Governance through Effective Compliance & Services — – Ethical implementation and trust preservation – Moving from reactive detection to proactive prevention – The need fo…
S45
Unveiling Trade Secrets: Exploring the Implications of trade agreements for AI Regulation in the Global South — However, adopting a rights-based approach to AI could ensure greater accountability and prevent harm to workers. In this…
S46
UK plans AI systems to monitor offenders and prevent crimes before they occur — The UK governmentis expanding its use of AI across prisons, probation and courts to monitor offenders, assess risk and p…
S47
AI and international peace and security: Key issues and relevance for Geneva — Conduct Regular Risk Assessments:Ongoing assessments of the risks associated with AI applications in military contexts s…
S48
How AI Drives Innovation and Economic Growth — The speakers show broad agreement on AI’s transformative potential for development but significant disagreements on impl…
S49
The Expanding Universe of Generative Models — In terms of autonomous agents, there is optimism towards their development. The report mentions ongoing work on language…
S50
Defying Cognitive Atrophy in the Age of AI: A World Economic Forum Stakeholder Dialogue — Moderate to high disagreement with significant implications. While speakers agreed on the importance of human developmen…
S51
How AI Drives Innovation and Economic Growth — The tone was notably optimistic yet pragmatic, described as representing “hope” rather than the “fear” that characterize…
S52
WS #294 AI Sandboxes Responsible Innovation in Developing Countries — Mariana Rozo-Pan: Thank you, Sophie. And hi, everyone. Good morning, good afternoon, good evening. We are very excited a…
S53
Secure Finance Risk-Based AI Policy for the Banking Sector — “So just like my colleague said, we already have an intraoperable sandbox across regulators and it is on tap.”[11]. “Any…
S54
WS #35 Unlocking sandboxes for people and the planet — Adam Zable: Thank you. Can you hear me? Okay, fantastic. First, thanks so much for inviting me to speak. As Bertrand s…
S55
Policy Network on Artificial Intelligence | IGF 2023 — Audience:Hi, Ansgar Kuna from EY. In AI, as with a number of these digital technologies that are arising, we’re seeing a…
S57
Open Forum #3 Cyberdefense and AI in Developing Economies — Capacity Building and Human Resources Development | Legal and regulatory Effective capacity building requires training…
S58
Why science metters in global AI governance — “We need to find standardized evaluation methodologies that work across different regulatory contexts.”[101]. “We need c…
S59
Embracing the future of e-commerce and AI now (WEF) — In conclusion, the implementation of advanced technology, particularly AI, in Cambodia’s customs system brings numerous …
S60
THE 2016 NATIONAL TRADE ESTIMATE REPORT — Italy: Some U.S. companies claim to have been adversely targeted by the Revenue Authority by virtue of the fact that …
S61
AI as critical infrastructure for continuity in public services — Resilience, data control, and secure compute are core prerequisites for trustworthy AI. Systems must stay operational an…
S62
Survive the AI jargon tsunami: Find shelter in your mother tongue — In addition to loss of meaning by inflated AI language, there is another deeper tension between the deterministic langua…
S63
Building the Next Wave of AI_ Responsible Frameworks &amp; Standards — This comment addresses a fundamental tension in AI deployment – the mismatch between probabilistic AI behavior and deter…
S64
AI-Driven Enforcement_ Better Governance through Effective Compliance & Services — And as has been discussed in Insight 2.0 project, that with the use of artificial intelligence and better technology, w…
S65
Taxing Tech Titans: Policy Options for the Global South | IGF 2023 WS #443 — The analysis explores various perspectives on international taxation and global tax rules. One significant aspect is the…
S66
Indirect Taxation of E-Commerce: Implications for developing countries (UNCTAD) — Adjusting VAT regimes to the challenges of e-commerce and ensuring global consistency are crucial while also promoting i…
S67
THE DRAFT MINISTERIAL REPORT PREFACE — | E EL LE EC CT TR RO ON NI IC C C CO OM MM ME ER RC CE E: : T TA AX XA AT TI IO ON N F FR RA AM ME EW WO OR RK K C CO O…
S68
Open Forum #58 Collaborating for Trustworthy AI an Oecd Toolkit and Spotlight on AI in Government — – Police transcription services streamlining administrative processes Seong Ju Park: Thank you, Mr Moderator. So before…
S69
Artificial intelligence (AI) – UN Security Council — On the other hand, the integration of AI in international law also presents significant opportunities. It offers the pot…
S70
Ethical AI_ Keeping Humanity in the Loop While Innovating — Sometimes we might be wrong and risk unchangeable effects. So we need to build a balance that doesn’t hinder innovation,…
S71
Law enforcement embraces AI for efficiency amid rising privacy concerns — Law enforcement agenciesincreasingly leverage AI across critical functions, from predictive policing, surveillance and f…
S72
https://dig.watch/event/india-ai-impact-summit-2026/ai-driven-enforcement_-better-governance-through-effective-compliance-services — The vision of a sovereign SLM for the tax domain stands out as a transformative initiative. The session on a Roadmap on …
S73
How the Global South Is Accelerating AI Adoption_ Finance Sector Insights — -Regulatory Approach and Framework: India’s Reserve Bank of India (RBI) has adopted a progressive, principles-based appr…
S74
Secure Finance Risk-Based AI Policy for the Banking Sector — Manchala highlighted the RBI’s recognition that AI technology is inherently probabilistic and may experience lapses desp…
S75
Secure Talk Using AI to Protect Global Communications &amp; Privacy — An audience question about government initiatives revealed evolving regulatory responses. The Reserve Bank of India has …
S76
Can (generative) AI be compatible with Data Protection? | IGF 2023 #24 — Artificial intelligence (AI) is reshaping the corporate governance framework and business processes, revolutionizing soc…
S77
European Central Bank advocates monitoring and regulation of AI in finance — The European Central Bank(ECB) has issued a call for increased vigilance and potential regulation regarding the use of A…
S78
Opening address of the co-chairs of the AI Governance Dialogue — The tone is consistently formal, diplomatic, and optimistic throughout. It maintains a ceremonial quality appropriate fo…
S79
Governments, Rewired / Davos 2025 — The overall tone was optimistic and forward-looking, with speakers highlighting the transformative potential of technolo…
S80
AI as critical infrastructure for continuity in public services — The discussion maintained a collaborative and constructive tone throughout, with participants building on each other’s p…
S81
AI Governance Dialogue: Presidential address — The tone remained consistently optimistic and collaborative throughout both presentations. President Karis spoke with co…
S82
Opening — The overall tone was formal yet optimistic. Speakers acknowledged the serious challenges posed by rapid technological ch…
S83
Powering the Technology Revolution / Davos 2025 — The tone was generally optimistic and forward-looking, with panelists highlighting opportunities for innovation and prog…
S84
Next-Gen Industrial Infrastructure / Davos 2025 — The tone was largely optimistic and forward-looking, with speakers enthusiastically sharing their visions and initiative…
S85
Host Country Open Stage — The tone throughout the discussion was consistently optimistic and solution-oriented. All presenters maintained a profes…
S86
AI for Safer Workplaces &amp; Smarter Industries Transforming Risk into Real-Time Intelligence — The discussion maintained an optimistic and collaborative tone throughout, with speakers consistently emphasizing human …
S87
AI: Lifting All Boats / DAVOS 2025 — The tone was largely optimistic and solution-oriented, with speakers acknowledging challenges but focusing on opportunit…
S88
WS #219 Generative AI Llms in Content Moderation Rights Risks — The discussion maintained a consistently serious and concerned tone throughout, with speakers demonstrating deep experti…
S89
How AI Is Transforming Indias Workforce for Global Competitivene — There are risks of over-automation without adequate human oversight and potential bias issues
S90
AI, Data Governance, and Innovation for Development — The tone of the discussion was largely optimistic and solution-oriented. Speakers acknowledged significant challenges bu…
S91
Skilling and Education in AI — The tone was cautiously optimistic throughout. Speakers acknowledged both the tremendous opportunities AI presents for I…
S92
Shaping the Future AI Strategies for Jobs and Economic Development — The discussion maintained an optimistic yet pragmatic tone throughout. While acknowledging significant challenges around…
S93
Resilient infrastructure for a sustainable world — The tone was professional and collaborative throughout, with speakers building on each other’s points constructively. Th…
S94
AI and Data Driving India’s Energy Transformation for Climate Solutions — The tone was collaborative and solution-oriented throughout, with speakers building on each other’s insights rather than…
S95
Building Population-Scale Digital Public Infrastructure for AI — The tone is optimistic and collaborative throughout, with speakers sharing concrete examples of successful implementatio…
S96
Closing Ceremony — The discussion maintains a consistently positive and collaborative tone throughout, characterized by gratitude, celebrat…
S97
Closing Session  — The tone throughout the discussion was consistently formal, collaborative, and optimistic. It maintained a celebratory y…
S98
Closing remarks — The tone is consistently celebratory, optimistic, and forward-looking throughout the discussion. It maintains an enthusi…
S99
World Economic Forum Annual Meeting Closing Remarks: Summary — The tone is consistently positive, celebratory, and grateful throughout the discussion. It begins with formal appreciati…
S100
Scaling Innovation Building a Robust AI Startup Ecosystem — The tone was consistently celebratory, appreciative, and inspirational throughout. It began formally with the awards cer…
S101
Ad Hoc Consultation: Friday 2nd February, Morning session — During the session, chaired by Mr. Chair, the speaker began by extending greetings to colleagues and esteemed delegates …
S102
Opening of the session — These key comments fundamentally shaped the discussion by establishing three major fault lines: (1) the tension between …
S103
Opening of the session — Chair:Thank you, Russian Federation. I was going to say that after we adopted the program of work, it has always been th…
S104
Transforming Agriculture_ AI for Resilient and Inclusive Food Systems — A critical theme throughout the discussion was the need for problem-driven rather than technology-driven approaches. Gho…
S105
9821st meeting — Yann Lecun argues that AI will enhance human intelligence and speed up scientific advancements. This could lead to signi…
S106
Shaping AI to ensure Respect for Human Rights and Democracy | IGF 2023 Day 0 Event #51 — Artificial Intelligence (AI) carries the potential to revolutionise various sectors worldwide, due to its capacities for…
S107
Keynote-Brad Smith — “We need to look at AI as the next great generator for human curiosity.”[11]. “Human capability is neither fixed nor fin…
S108
WS #283 AI Agents: Ensuring Responsible Deployment — Will Carter from Google defined agentic AI by two key characteristics: the ability to perform complex reasoning and take…
S109
Safe and Responsible AI at Scale Practical Pathways — Rohit Bardawaj from India’s Ministry of Statistics stressed the importance of establishing a uniform definition and fram…
S110
Keynote-Martin Schroeter — Building confidence and security in the use of ICTs | Artificial intelligence For AI to be trusted in production, it mu…
S111
Keynote-Vishal Sikka — “So if you are counting, that is about more than 250 times improvement in productivity.”[1]. “Recently, he rebuilt that …
S112
High Level Leaders Session 2 | IGF 2023 — Deborah Steele:you would take your seats please. Hello and welcome to High-Level Session 2, Evolving Trends in Misinform…
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
Abhishek Kumar
1 argument, 112 words per minute, 194 words, 103 seconds
Argument 1
Quick, accurate taxpayer information and AI‑based litigation risk assessment (Abhishek Kumar)
EXPLANATION
Abhishek Kumar explains that AI can provide taxpayers with rapid and precise information, improving their compliance experience. He also highlights that AI can assess litigation risk by tagging issues in assessment and appellate orders and predicting case vulnerability, thereby reducing the need for court proceedings.
EVIDENCE
He states that the first key step is the quick availability of accurate information to taxpayers, followed by more effective NERJ campaigns through AI infusion, and that AI can tag issues in assessment, appellate and judicial orders, link judicial orders and predict case vulnerability to potentially retract litigation [88-97].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
External sources describe the first step as rapid, accurate information for taxpayers and AI tagging of assessment and appellate orders to predict case vulnerability, confirming the claimed benefits [S1].
MAJOR DISCUSSION POINT
AI‑enabled taxpayer services and risk assessment
AGREED WITH
Shri Ravi Agrawal, Shashi Bhushan Shukla, Martin Wilcox
Shashi Bhushan Shukla
1 argument, 137 words per minute, 1993 words, 867 seconds
Argument 1
“Nudge” campaigns prompting voluntary disclosures, yielding billions in additional revenue (Shashi Bhushan Shukla)
EXPLANATION
Shukla describes how targeted nudges, delivered through AI‑driven communications, encourage taxpayers to voluntarily correct their filings. These campaigns have led to large numbers of disclosures of foreign assets and bogus deductions, generating substantial additional tax revenue.
EVIDENCE
He cites the foreign-asset nudge in December that prompted 1.57 lakh taxpayers to disclose assets worth ₹99,000 crore, resulting in ₹6,540 crore of additional income, and a bogus-donation campaign where 6.96 lakh taxpayers withdrew claims worth ₹9,879 crore, adding ₹1,758 crore in taxes [560-583].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Targeted AI-driven nudges have been reported to trigger over 1.11 crore updated returns with a revenue impact exceeding ₹8,800 crore, supporting the revenue gains described [S3].
MAJOR DISCUSSION POINT
AI‑driven behavioral nudges for revenue enhancement
AGREED WITH
Shri Ravi Agrawal, T. Srinivasan, Avneesh Pandey
Shri Ravi Agrawal
1 argument, 121 words per minute, 1337 words, 658 seconds
Argument 1
New Income Tax Act 2025 creates a technology‑driven ecosystem; AI reduces interpretation disputes and litigation (Shri Ravi Agrawal)
EXPLANATION
Ravi Agrawal outlines that the upcoming Income Tax Act 2025 will shift the tax administration to a technology‑centric model, where AI‑enabled algorithms interpret the simplified language of the law. This is expected to minimise ambiguities, lower litigation and enhance tax certainty.
EVIDENCE
He notes that the new act simplifies language and procedures, reduces interpretation ambiguity and will be supported by AI algorithms that minimise scope for differing interpretations, thereby creating a positive environment for reducing litigation and enhancing trust-based compliance [32-34].
MAJOR DISCUSSION POINT
Legislative reform enabling AI‑driven tax administration
AGREED WITH
T. Srinivasan, Suvendu Pati, Martin Wilcox
Ramesh Revuru
1 argument, 145 words per minute, 421 words, 173 seconds
Argument 1
Blueverse/Bharatverse multi‑agent platform enabling rapid, deterministic AI solutions for CBDT (Ramesh Revuru)
EXPLANATION
Revuru presents the Blueverse platform, an agentic system that allows rapid construction of multi‑agent solutions for the tax department. An Indianised version, Bharatverse, is being launched to provide pre‑built layers (foundational models, data, knowledge, orchestration, consumption) that ensure deterministic outcomes for tax enforcement.
EVIDENCE
He describes Blueverse as an agentic platform with five pre-built layers enabling faster multi-agent development, and explains that Bharatverse is the Indianised version purpose-built for CBDT, emphasizing the need for ‘right action’ to make generative AI deterministic for tax use cases [110-117].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The symposium notes the launch of an Indianised version of Blueverse called Bharatverse, purpose-built for the CBDT with pre-built layers for deterministic outcomes [S3].
MAJOR DISCUSSION POINT
Platform for deterministic, multi‑agent AI in tax administration
DISAGREED WITH
Ravi Agrawal, Professor Mausam
T. Srinivasan
1 argument, 204 words per minute, 1420 words, 416 seconds
Argument 1
Sovereign large language model built with LoRA‑adapted SLMs for secure, domain‑specific tax intelligence (T. Srinivasan)
EXPLANATION
Srinivasan explains the creation of a sovereign LLM tailored to tax data by adapting small language models (SLMs) using LoRA, which requires only a fraction of full‑model training. The approach keeps data within secure, government‑controlled environments and adds domain‑specific knowledge for tax intelligence.
EVIDENCE
He details that the system uses LoRA to adapt SLMs at 1-2 % of full training cost, incorporates vetted tax data, employs vector databases for retrieval and citation, and builds an ontology covering sections, precedents and compliance rules to produce a focused, multilingual tax intelligence model [129-156].
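The LoRA adaptation Srinivasan describes can be illustrated with a minimal numeric sketch; the dimensions, rank and weights below are invented for illustration and are not taken from the actual system.

```python
import numpy as np

# Minimal sketch of LoRA (Low-Rank Adaptation): the pretrained weight stays
# frozen, and only two small factors A and B are trained, which is what keeps
# adaptation cost at a small fraction of full fine-tuning.
d_in, d_out, rank = 512, 512, 4                # illustrative sizes

rng = np.random.default_rng(0)
W_frozen = rng.standard_normal((d_out, d_in))  # pretrained weight, never updated
A = rng.standard_normal((rank, d_in)) * 0.01   # trainable low-rank factor
B = np.zeros((d_out, rank))                    # zero init: adapter starts as a no-op

def lora_forward(x):
    # Effective weight is W_frozen + B @ A, applied without materialising it.
    return W_frozen @ x + B @ (A @ x)

full_params = d_out * d_in
lora_params = rank * (d_in + d_out)
print(f"trainable fraction: {lora_params / full_params:.1%}")
```

At these sizes the adapter holds about 1.6 % of the full parameter count, in line with the 1-2 % training-cost figure cited in the talk.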
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
A sovereign tax-domain LLM using LoRA to adapt small language models is described, emphasizing low training cost and data staying within secure government environments [S3].
MAJOR DISCUSSION POINT
Secure, domain‑specific LLM for tax administration
AGREED WITH
Shri Ravi Agrawal, Suvendu Pati, Martin Wilcox
DISAGREED WITH
Martin Wilcox
Martin Wilcox
2 arguments, 181 words per minute, 754 words, 249 seconds
Argument 1
“Bring‑Your‑Own‑Model” approach and in‑warehouse inference delivering 25× faster scoring at scale (Martin Wilcox)
EXPLANATION
Wilcox describes Teradata’s BYOM capability that lets organisations import models trained in any framework and run inference directly inside the data warehouse. This eliminates data movement and yields up to 25‑fold speed improvements for large‑scale scoring, enabling real‑time decision making.
EVIDENCE
He gives the example of Brazil’s Sicredi, where income-estimation models run 25 times faster when executed inside the parallel data warehouse, turning a once-daily run into an hourly one, and stresses that the approach brings models to production rather than just training them [324-347].
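The in-warehouse idea can be sketched with sqlite3 standing in for a parallel warehouse; the model coefficients and table are invented, and a real BYOM deployment would import a trained model artefact rather than a Python function.

```python
import sqlite3

# Sketch of "bring the model to the data": register the scoring function
# inside the database and run inference where the rows live, instead of
# exporting data to an external scoring job. Coefficients are invented.
coef, intercept = 0.004, 12.0            # toy income-estimation model

def score_income(spend):
    return intercept + coef * spend

con = sqlite3.connect(":memory:")
con.create_function("score_income", 1, score_income)
con.execute("CREATE TABLE customers (id INTEGER, monthly_spend REAL)")
con.executemany("INSERT INTO customers VALUES (?, ?)",
                [(1, 25000.0), (2, 4000.0)])

# Inference happens where the data lives; there is no bulk export step.
rows = con.execute(
    "SELECT id, score_income(monthly_spend) FROM customers ORDER BY id"
).fetchall()
print(rows)
```

The speedup in the talk comes from this locality: scoring runs in parallel next to the data rather than after a costly extract.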
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Teradata’s BYOM capability enabled a 25-fold speedup for income-estimation models when run inside a parallel data warehouse, turning daily runs into hourly ones [S3].
MAJOR DISCUSSION POINT
In‑warehouse model deployment for high‑performance AI
AGREED WITH
Shri Ravi Agrawal, T. Srinivasan, Suvendu Pati
Argument 2
Graph‑based, multimodal AI risk analytics accelerate detection of financial crime at national scale (Martin Wilcox)
EXPLANATION
Wilcox highlights that modern AI risk analytics combine graph analytics with multimodal data (images, audio, text) to uncover complex financial crime networks. Performing graph computations inside the warehouse avoids sampling errors and scales to national‑level data volumes.
EVIDENCE
He notes that building graphs at India scale is an O(N²) problem, requiring scalable systems that bring complex graph algorithms to the data warehouse rather than extracting samples, and that multimodal data integration expands analytical possibilities [318-327].
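The O(N²) point can be illustrated with a toy contrast between all-pairs comparison and identifier-keyed edge building; the accounts and identifiers below are invented for illustration.

```python
from collections import defaultdict
from itertools import combinations

# Toy contrast: all-pairs linking grows quadratically with the number of
# accounts, while indexing on shared identifiers (phone, device) emits only
# the edges that actually exist. Accounts and identifiers are invented.
accounts = {
    "A1": {"phone": "98x", "device": "d1"},
    "A2": {"phone": "98x", "device": "d2"},
    "A3": {"phone": "97y", "device": "d2"},
    "A4": {"phone": "96z", "device": "d3"},
}

naive_pairs = len(list(combinations(accounts, 2)))   # O(N^2) comparisons

index = defaultdict(list)                            # identifier -> accounts
for acc, attrs in accounts.items():
    for key_value in attrs.items():
        index[key_value].append(acc)

edges = set()
for members in index.values():
    for a, b in combinations(sorted(members), 2):
        edges.add((a, b))

print(naive_pairs, sorted(edges))
```

Four accounts already need six pairwise comparisons but yield only two real links; at national scale the gap is what makes in-warehouse, index-driven graph construction necessary.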
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The discussion highlights the need for scalable graph analytics inside the warehouse to handle O(N²) complexity at national scale, integrating multimodal data for financial crime detection [S3].
MAJOR DISCUSSION POINT
Scalable graph and multimodal analytics for financial crime detection
AGREED WITH
Shri Ravi Agrawal, Abhishek Kumar, Shashi Bhushan Shukla
Suvendu Pati
1 argument, 146 words per minute, 1812 words, 740 seconds
Argument 1
Seven AI “sutras” and sandbox model adopted nationally to ensure responsible AI use (Suvendu Pati)
EXPLANATION
Pati explains that the RBI’s AI committee formulated seven guiding sutras and 26 recommendations, which have been adopted by the Indian government as a national AI governance framework. An AI sandbox is also being created to allow entities to experiment safely with AI while addressing compute and data constraints.
EVIDENCE
He outlines the committee’s report with seven sutras and six pillars, their adoption by the government on 5 Nov, and the establishment of an AI sandbox that provides cross-sectoral, anonymised data and compute resources to support responsible AI experimentation [371-390].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
India’s RBI has formalised seven AI sutras and related recommendations, which have been adopted by the government, and an AI sandbox is being created to enable safe experimentation [S29][S30].
MAJOR DISCUSSION POINT
National AI governance principles and sandbox for responsible innovation
AGREED WITH
Shri Ravi Agrawal, T. Srinivasan, Martin Wilcox
Professor Mausam
1 argument, 190 words per minute, 2446 words, 769 seconds
Argument 1
Emphasis on accountability, human‑in‑the‑loop oversight, and bias mitigation to preserve trust (Professor Mausam)
EXPLANATION
Mausam stresses that AI systems must incorporate clear accountability, retain human oversight, and actively mitigate algorithmic bias to maintain public trust. He warns that unchecked AI can produce unfair outcomes, especially in diverse societies, and advocates for safeguards and transparent processes.
EVIDENCE
He warns that autonomous AI could erode trust, cites examples of bias against African-American people in judicial settings, and calls for human-in-the-loop validation, bias monitoring, and careful data handling to protect civil liberties [291-304].
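One common way to implement the oversight Mausam calls for is a triage gate in which the model only recommends and a human approves any enforcement step; the threshold and case data below are assumptions for illustration.

```python
# Sketch of a human-in-the-loop gate: the model's risk score can auto-clear
# a case but can never trigger enforcement on its own; everything above the
# threshold goes to an analyst queue. Threshold and scores are invented.
REVIEW_THRESHOLD = 0.7

def triage(case_id, risk_score):
    """Route a model output: auto-clear low risk, queue the rest for a human."""
    if risk_score < REVIEW_THRESHOLD:
        return ("auto_clear", case_id)
    return ("human_review", case_id)     # no autonomous enforcement action

decisions = [triage(c, s) for c, s in [("C1", 0.2), ("C2", 0.95)]]
print(decisions)
```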
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Calls for transparent, accountable AI with human-in-the-loop validation and bias mitigation are echoed in broader governance discussions on trustworthy AI [S32][S33].
MAJOR DISCUSSION POINT
Ethical safeguards and human oversight in AI deployment
AGREED WITH
Shri Ravi Agrawal, Justice R. Mahadevan, Ram Ganesh
Justice R. Mahadevan
1 argument, 144 words per minute, 668 words, 276 seconds
Argument 1
Trust‑based, ethical AI operationalisation as a cornerstone of modern governance (Justice R. Mahadevan)
EXPLANATION
Justice Mahadevan commends the symposium for demonstrating that AI is now operational and emphasizes that its deployment must be ethical, accountable and transparent to build trust‑based governance. He links AI’s role to reducing litigation and fostering voluntary compliance.
EVIDENCE
In his vote of thanks he notes that AI is no longer aspirational but operational, highlights the new Income Tax Act’s tech-driven approach to reduce litigation, and stresses the need for ethical AI with accountability and transparency [617-623].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The symposium stresses ethical, accountable, and transparent AI deployment to reduce litigation and foster voluntary compliance, aligning with the speaker’s points [S3][S31].
MAJOR DISCUSSION POINT
Ethical, trust‑based AI as a foundation for governance
AGREED WITH
Shri Ravi Agrawal, Professor Mausam, Ram Ganesh
Ram Ganesh
1 argument, 158 words per minute, 812 words, 306 seconds
Argument 1
AI co‑pilot ingests FIRs, generates compliant investigative pathways, and fuses open‑source intelligence (Ram Ganesh)
EXPLANATION
Ganesh describes a co‑pilot AI tool that automatically reads FIRs, creates investigation road‑maps aligned with SOPs and judicial pronouncements, and pulls in telecom, forensic and open‑source data to guide investigators. The system also assists with victim support and case documentation.
EVIDENCE
He explains that the co-pilot ingests FIRs, produces compliant investigative paths, issues routine legal requests, analyses telecom and forensic data, gathers open-source intelligence from platforms like Facebook and Google Pay, and generates case diaries and victim-assistance actions, having been used by 233 officers for 467 cases [464-482].
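The described flow can be sketched as staged steps appending to an auditable case record; the SOP table, offence labels and FIR fields below are invented, and the real tool presumably uses an LLM rather than a lookup table.

```python
from dataclasses import dataclass, field

# Sketch of the co-pilot flow: an FIR becomes a case file, and a planning
# step maps the offence to SOP-compliant actions. The SOP table is a
# stand-in for the model-driven planning the real system performs.
@dataclass
class CaseFile:
    fir_id: str
    offence: str
    steps: list = field(default_factory=list)

SOP = {"cyber_fraud": ["preserve server logs",
                       "request telecom CDRs",
                       "collect payment-platform KYC"]}

def plan_investigation(case):
    case.steps.extend(SOP.get(case.offence, ["escalate for manual planning"]))
    return case

case = plan_investigation(CaseFile("FIR-2025-0142", "cyber_fraud"))
print(case.steps)
```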
MAJOR DISCUSSION POINT
AI‑assisted investigative workflow for cybercrime
AGREED WITH
Shri Ravi Agrawal, Professor Mausam, Justice R. Mahadevan
Avneesh Pandey
1 argument, 133 words per minute, 1081 words, 485 seconds
Argument 1
SEBI’s AI suite (RIDAR, Sudarshan, Infomerge) automates ad compliance, fraud detection, and cyber‑audit analysis (Avneesh Pandey)
EXPLANATION
Pandey outlines SEBI’s suite of AI tools: RIDAR monitors advertising compliance, Sudarshan detects financial fraud across languages, and Infomerge streamlines investigation data collation and reporting. These systems automate detection, provide real‑time dashboards, and ensure cyber‑audit compliance.
EVIDENCE
He details RIDAR’s role in flagging non-disclosure in mutual-fund ads, Sudarshan’s multimodal, multilingual fraud detection, and Infomerge’s ability to ingest diverse data, generate structured reports and flag missing audit artifacts, all operating with human-in-the-loop validation [524-549].
MAJOR DISCUSSION POINT
AI‑driven compliance and fraud detection tools at SEBI
AGREED WITH
Shri Ravi Agrawal, T. Srinivasan, Shashi Bhushan Shukla
Amandeep Dhanoa
1 argument, 59 words per minute, 1133 words, 1151 seconds
Argument 1
Moderator frames the symposium, linking industry, academia, and regulators to co‑create AI‑enabled governance (Amandeep Dhanoa)
EXPLANATION
Dhanoa, as moderator, sets the agenda by welcoming participants, outlining the two‑track structure (industry/academia and regulatory bodies), and emphasizing the collaborative role of each ecosystem in shaping AI‑driven governance. She ensures smooth transitions between sessions and keeps the programme on schedule.
EVIDENCE
She thanks the chairman, introduces the two categories, calls upon speakers from industry and academia, and later invites the regulatory panel, while also reminding speakers to keep within time limits [78-84] and later [354-362].
MAJOR DISCUSSION POINT
Facilitating multi‑stakeholder dialogue on AI in governance
Agreements
Agreement Points
AI‑driven enforcement and compliance improves revenue, reduces disputes and litigation
Speakers: Shri Ravi Agrawal, Abhishek Kumar, Shashi Bhushan Shukla, Martin Wilcox
New Income Tax Act 2025 creates a technology‑driven ecosystem; AI reduces interpretation disputes and litigation (Shri Ravi Agrawal)
Quick, accurate taxpayer information and AI‑based litigation risk assessment (Abhishek Kumar)
“Nudge” campaigns prompting voluntary disclosures, yielding billions in additional revenue (Shashi Bhushan Shukla)
Graph‑based, multimodal AI risk analytics accelerate detection of financial crime at national scale (Martin Wilcox)
All four speakers stress that AI can provide taxpayers with rapid, accurate information, assess litigation risk, deliver targeted nudges that induce voluntary disclosures, and use advanced risk-analytics (including graph and multimodal data) to detect financial crime, thereby increasing revenue and lowering disputes or litigation. [32-34][88-97][560-583][318-327]
POLICY CONTEXT (KNOWLEDGE BASE)
The AI-Driven Enforcement report highlights that AI-enabled compliance can boost revenue collection and lower dispute and litigation costs through more effective governance [S44].
AI systems must incorporate human‑in‑the‑loop, clear accountability and ethical safeguards to preserve public trust
Speakers: Shri Ravi Agrawal, Professor Mausam, Justice R. Mahadevan, Ram Ganesh
New Income Tax Act 2025 creates a technology‑driven ecosystem; AI reduces interpretation disputes and litigation (Shri Ravi Agrawal)
Emphasis on accountability, human‑in‑the‑loop oversight, and bias mitigation to preserve trust (Professor Mausam)
Trust‑based, ethical AI operationalisation as a cornerstone of modern governance (Justice R. Mahadevan)
AI co‑pilot ingests FIRs, generates compliant investigative pathways, and fuses open‑source intelligence (Ram Ganesh)
These speakers converge on the need for AI deployments to be accountable, include human oversight, mitigate bias and respect ethical norms, ensuring that AI augments rather than replaces human decision-making. [36-44][291-304][617-623][495-498]
POLICY CONTEXT (KNOWLEDGE BASE)
Ethical implementation and trust preservation, including human-in-the-loop controls, are core recommendations of the AI-Driven Enforcement guidance [S44].
Building technical and organisational capacity is essential for effective AI adoption in tax and regulatory domains
Speakers: Shri Ravi Agrawal, T. Srinivasan, Shashi Bhushan Shukla, Avneesh Pandey
New Income Tax Act 2025 creates a technology‑driven ecosystem; AI reduces interpretation disputes and litigation (Shri Ravi Agrawal)
Sovereign large language model built with LoRA‑adapted SLMs for secure, domain‑specific tax intelligence (T. Srinivasan)
“Nudge” campaigns prompting voluntary disclosures, yielding billions in additional revenue (Shashi Bhushan Shukla)
SEBI’s AI suite (RIDAR, Sudarshan, Infomerge) automates ad compliance, fraud detection, and cyber‑audit analysis (Avneesh Pandey)
All four emphasize that AI initiatives cannot succeed without dedicated capacity-building: training staff, developing domain-specific models, and democratizing AI development. They cite ongoing training programmes, low-cost model adaptation, and internal skill development as key enablers. [45-52][129-156][555-562][506-513]
POLICY CONTEXT (KNOWLEDGE BASE)
Capacity-building is repeatedly stressed in policy papers on AI governance, such as the AI-Driven Enforcement roadmap, the Digital Public Infrastructure capacity-building brief, and UN-IGF discussions on AI expertise development [S44][S56][S57][S58].
Secure, sovereign data handling and robust data‑governance frameworks are critical for AI in public finance
Speakers: Shri Ravi Agrawal, T. Srinivasan, Suvendu Pati, Martin Wilcox
New Income Tax Act 2025 creates a technology‑driven ecosystem; AI reduces interpretation disputes and litigation (Shri Ravi Agrawal)
Sovereign large language model built with LoRA‑adapted SLMs for secure, domain‑specific tax intelligence (T. Srinivasan)
Seven AI “sutras” and sandbox model adopted nationally to ensure responsible AI use (Suvendu Pati)
“Bring‑Your‑Own‑Model” approach and in‑warehouse inference delivering 25× faster scoring at scale (Martin Wilcox)
These speakers agree that AI must operate on data that remains within secure, sovereign environments, supported by national AI governance (sutras, sandbox) and technical solutions that avoid data movement (in-warehouse inference). This protects data integrity while enabling large-scale analytics. [43-44][129-156][371-390][318-327]
POLICY CONTEXT (KNOWLEDGE BASE)
Secure, sovereign data control is identified as a prerequisite for trustworthy AI in public services and finance, emphasizing data-sovereignty and governance [S61].
Similar Viewpoints
All three stress that AI deployment must be underpinned by ethical principles, accountability, and human oversight to maintain trust in governance systems. [36-44][291-304][617-623]
Speakers: Shri Ravi Agrawal, Professor Mausam, Justice R. Mahadevan
New Income Tax Act 2025 creates a technology‑driven ecosystem; AI reduces interpretation disputes and litigation (Shri Ravi Agrawal)
Emphasis on accountability, human‑in‑the‑loop oversight, and bias mitigation to preserve trust (Professor Mausam)
Trust‑based, ethical AI operationalisation as a cornerstone of modern governance (Justice R. Mahadevan)
Both highlight the need for a sovereign, controlled AI ecosystem—through technical means (LoRA‑adapted models) and policy mechanisms (sutras, sandbox)—to ensure responsible, secure AI use in public sector. [129-156][371-390]
Speakers: T. Srinivasan, Suvendu Pati
Sovereign large language model built with LoRA‑adapted SLMs for secure, domain‑specific tax intelligence (T. Srinivasan)
Seven AI “sutras” and sandbox model adopted nationally to ensure responsible AI use (Suvendu Pati)
Unexpected Consensus
Proactive, predictive use of AI to prevent wrongdoing before it occurs
Speakers: Professor Mausam, Ram Ganesh
Emphasis on accountability, human‑in‑the‑loop oversight, and bias mitigation to preserve trust (Professor Mausam)
AI co‑pilot ingests FIRs, generates compliant investigative pathways, and fuses open‑source intelligence (Ram Ganesh)
While Professor Mausam discusses AI for predictive policing and crime prevention, Ram Ganesh describes an AI co-pilot that automatically generates investigative pathways from FIRs, effectively moving from reactive to proactive enforcement. Their convergence on AI as a pre-emptive tool across tax and police domains was not anticipated. [218-222][464-470][471-478]
POLICY CONTEXT (KNOWLEDGE BASE)
Proactive prevention is a key pillar of AI-Driven Enforcement and is reflected in the UK AI Action Plan, which pilots predictive tools to stop crime before it happens [S44][S46].
Overall Assessment

There is strong consensus among policymakers, technologists, and regulators that AI can materially improve tax compliance, revenue collection and law‑enforcement effectiveness, provided it is deployed within a secure, sovereign data environment, with robust ethical safeguards, human oversight and substantial capacity‑building. The alignment spans technical, regulatory and ethical dimensions, indicating a mature, multi‑stakeholder approach to AI‑enabled governance.

High – the speakers repeatedly echo the same principles across different sectors, suggesting that AI adoption in Indian public finance and enforcement is moving from experimental to operational with a clear, shared roadmap.

Differences
Different Viewpoints
Determinism versus probabilistic nature of AI for tax enforcement
Speakers: Ramesh Revuru, Ravi Agrawal, Professor Mausam
Blueverse/Bharatverse multi‑agent platform enabling rapid, deterministic AI solutions for CBDT (Ramesh Revuru)
Human must drive AI rather than AI driving the human; need for accountability and safeguards (Ravi Agrawal)
AI must not be fully autonomous; need for human‑in‑the‑loop, bias mitigation and safeguards to preserve trust (Professor Mausam)
All three speakers accept AI’s role in tax administration, but they diverge on how to manage its inherent probabilistic behaviour. Revuru argues for a deterministic ‘right-action’ platform built on pre-configured layers to guarantee outcomes [110-122]. Agrawal stresses that humans should remain in control of AI, warning against blind reliance and calling for capacity building and accountability [48-51][52-53]. Mausam warns that fully autonomous AI can erode public trust, emphasizing human oversight and bias mitigation [291-304]. Thus the disagreement centres on whether AI should be engineered to behave deterministically or be kept probabilistic but tightly overseen.
POLICY CONTEXT (KNOWLEDGE BASE)
The tension between deterministic compliance requirements and the probabilistic behavior of AI models is discussed in analyses of AI jargon and responsible AI frameworks [S62][S63].
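One way to reconcile the two positions is a propose-then-verify gate: a probabilistic model drafts actions, and deterministic rules decide which may execute. The whitelist, proposal format and fallback below are invented to illustrate the pattern, not Bharatverse’s actual design.

```python
# Sketch of a deterministic gate over probabilistic output: the generative
# model proposes candidate actions, but only proposals passing fixed rule
# checks are acted upon; otherwise a deterministic fallback applies.
ALLOWED_SECTIONS = {"80G", "54F", "143(1)"}   # invented whitelist

def right_action(model_proposals):
    """Return the first proposal citing only whitelisted sections."""
    for p in model_proposals:
        if p["cited_sections"] <= ALLOWED_SECTIONS:
            return p["action"]
    return "refer_to_officer"                 # deterministic fallback

proposals = [
    {"action": "allow_deduction", "cited_sections": {"80G", "10(38)"}},
    {"action": "issue_notice",    "cited_sections": {"143(1)"}},
]
print(right_action(proposals))
```

Under this pattern the model stays probabilistic, but the action taken is fully determined by the rule check, which is one reading of Revuru’s “right action” framing that still satisfies the oversight concerns.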
Data localisation versus flexible model import for AI deployment
Speakers: T. Srinivasan, Martin Wilcox
Sovereign large language model built with LoRA‑adapted SLMs for secure, domain‑specific tax intelligence (T. Srinivasan)
“Bring‑Your‑Own‑Model” approach and in‑warehouse inference delivering 25× faster scoring at scale (Martin Wilcox)
Srinivasan proposes keeping all training data and model adaptation inside a secure government environment, using LoRA to adapt small models without external data movement [129-156]. Wilcox, by contrast, advocates importing models trained on any platform (PMML, MOJO, ONNX) into the data warehouse for high-performance inference, accepting external model provenance as long as execution stays inside the warehouse [324-347]. The tension lies in the preferred balance between data sovereignty and operational flexibility.
POLICY CONTEXT (KNOWLEDGE BASE)
Debates on data localisation versus flexible data use echo the sovereign data emphasis in AI infrastructure policies, which stress control over data residency while allowing cross-border model deployment [S61].
Primary mechanism for AI‑driven compliance and enforcement
Speakers: Shashi Bhushan Shukla, Martin Wilcox, Ram Ganesh
“Nudge” campaigns prompting voluntary disclosures, yielding billions in additional revenue (Shashi Bhushan Shukla)
AI‑driven risk analytics using graph‑based, multimodal data to detect financial crime at national scale (Martin Wilcox)
AI co‑pilot ingesting FIRs and generating compliant investigative pathways (Ram Ganesh)
All three aim to improve compliance and reduce fraud, yet they champion different tactical approaches. Shukla focuses on behavioural nudges that encourage taxpayers to self-correct, reporting large revenue gains from foreign-asset and bogus-donation campaigns [560-583]. Wilcox emphasises sophisticated risk scoring through graph analytics and multimodal data integration to flag suspicious activity before it materialises [318-327]. Ganesh describes an AI-assisted investigative co-pilot that automates case workflow and evidence gathering for cybercrime investigations [464-482]. The disagreement is not about the end goal but about which AI-enabled toolset should be prioritised.
Unexpected Differences
Proprietary multi‑agent platform versus open sandbox for AI experimentation
Speakers: Ramesh Revuru, Suvendu Pati
Blueverse/Bharatverse multi‑agent platform enabling rapid, deterministic AI solutions for CBDT (Ramesh Revuru)
Seven AI “sutras” and sandbox model adopted nationally to ensure responsible AI use (Suvendu Pati)
While both speakers advocate AI for tax administration, Revuru pushes a proprietary, purpose-built platform (Bharatverse) with pre-built layers to deliver deterministic outcomes [110-117], whereas Pati stresses an open, cross-sectoral AI sandbox that provides anonymised data and compute resources for broader experimentation [371-390]. The contrast between a closed, vendor-driven solution and an open, government-led sandbox was not anticipated given their shared focus on AI enablement.
POLICY CONTEXT (KNOWLEDGE BASE)
Sandbox initiatives in finance and IGF workshops advocate open, interoperable environments for AI testing, contrasting with proprietary platform approaches [S52][S53].
Optimism about autonomous AI code generation versus caution about autonomous AI impacts
Speakers: Ravi Agrawal, Professor Mausam
Personal anecdote of rapidly generating code with AI, illustrating speed and potential (Ravi Agrawal)
Warning that autonomous AI can erode trust, cause bias, and must not replace human judgement (Professor Mausam)
Agrawal recounts developing a functional application in five to six hours using AI-generated code, portraying a highly autonomous AI capability as beneficial [53-63]. Mausam, however, cautions that autonomous AI can produce errors, bias, and loss of public confidence, insisting on human-in-the-loop safeguards [291-304]. The juxtaposition of a celebratory view of autonomous code generation with a warning about autonomous AI’s risks was unexpected.
POLICY CONTEXT (KNOWLEDGE BASE)
Discussions on generative AI agents show a split between optimism about autonomous capabilities and caution regarding risks, as highlighted in reports on autonomous agents and broader AI governance dialogues [S49][S50][S48].
Overall Assessment

The symposium displayed broad consensus that AI is essential for modernising tax administration and law enforcement, yet substantive disagreements emerged around the technical architecture (deterministic platforms vs probabilistic models, data localisation vs flexible model import) and the preferred compliance strategy (behavioural nudges, risk‑scoring analytics, or investigative co‑pilots). These divergences reflect differing priorities among industry, academia, and regulators regarding control, transparency, and scalability of AI solutions.

Moderate – while no outright conflict was voiced, the speakers presented competing visions for implementation and governance. The lack of alignment on core design choices (determinism, data sovereignty, and compliance mechanisms) could affect coordination across agencies and slow the rollout of a unified AI framework unless reconciled.

Partial Agreements
All three concur that AI should be leveraged to improve taxpayer experience and lower litigation, but they differ on the operative pathway: Agrawal stresses legislative simplification and algorithmic interpretation of the new Act [32-34]; Shukla highlights behavioural nudges that drive voluntary compliance [560-583]; Kumar points to AI‑enabled information provision and litigation‑risk tagging within the taxpayer lifecycle [88-97].
Speakers: Ravi Agrawal, Shashi Bhushan Shukla, Abhishek Kumar
New Income Tax Act 2025 creates a technology‑driven ecosystem; AI reduces interpretation disputes and litigation (Ravi Agrawal)
“Nudge” campaigns prompting voluntary disclosures, yielding billions in additional revenue (Shashi Bhushan Shukla)
Quick, accurate taxpayer information and AI‑based litigation risk assessment (Abhishek Kumar)
All agree on the necessity of responsible AI governance. Pati proposes a formal set of sutras and an AI sandbox for safe experimentation [371-390]; Mausam calls for human oversight and bias checks to maintain public trust [291-304]; Agrawal stresses accountability, secure data, and capacity building as prerequisites for AI adoption [36-44]. Their divergence lies in the concrete governance instruments (formal sutras and sandbox vs procedural safeguards vs capacity‑building programmes).
Speakers: Suvendu Pati, Professor Mausam, Ravi Agrawal
Seven AI “sutras” and sandbox model adopted nationally to ensure responsible AI use (Suvendu Pati)
Emphasis on accountability, human‑in‑the‑loop oversight, and bias mitigation to preserve trust (Professor Mausam)
Need for clear accountability, safeguards and continuous training when adopting AI (Ravi Agrawal)
Takeaways
Key takeaways
AI is moving from a conceptual tool to an operational core of tax compliance and enforcement, enabling faster, more accurate taxpayer services and reducing litigation.
The new Income Tax Act 2025 establishes a technology‑driven ecosystem; AI will help interpret the law, provide deterministic outcomes, and support trust‑based voluntary compliance.
Industry and academia are delivering scalable AI platforms (Blueverse/Bharatverse, sovereign LLMs with LoRA‑adapted SLMs) that can be rapidly customized for the CBDT’s specific needs.
AI‑driven risk analytics (graph analytics, multimodal models, Bring‑Your‑Own‑Model) can deliver inference up to 25× faster, making real‑time fraud detection feasible at national scale.
Ethical governance frameworks (the seven AI “sutras”, sandbox model, human‑in‑the‑loop, bias mitigation, privacy safeguards) are being codified and adopted across regulators.
Law‑enforcement agencies (RBI, police, SEBI, CyberEye) are already using AI for mule‑account detection, investigative co‑pilots, content compliance, and cyber‑audit, showing cross‑sectoral applicability.
Targeted “nudge” campaigns that use AI‑generated prompts have generated billions of rupees in additional revenue and improved taxpayer behaviour.
Collaboration among government, industry, and academia is essential for data sharing, capacity building, and building a unified AI‑enabled governance ecosystem.
Resolutions and action items
Scale up Project Insight 2.0 and integrate the sovereign LLM/SLM stack for tax‑related question answering, litigation‑risk scoring and pre‑filing assistance.
Deploy the AI sandbox and adopt the seven AI sutras nationally to provide a safe environment for experimentation and standardised risk‑mitigation practices.
Roll out the MuleHunter AI model across all scheduled banks, integrate it with a central aggregation service, and move towards real‑time transaction scoring.
Implement the Bharatverse (Indianised Blueverse) platform for multi‑agent AI solutions within the CBDT, ensuring deterministic ‘right‑action’ outcomes.
Expand the Nudge (Saksham) framework to provide real‑time, pre‑filing prompts and post‑filing corrective nudges, leveraging cross‑source data validation.
Strengthen inter‑agency data repositories to enable unified AI analytics (e.g., shared CRS/FATCA data, financial transaction data, law‑enforcement records).
Continue capacity‑building programmes across departments (training, AI literacy, human‑in‑the‑loop protocols) as highlighted by multiple speakers.
Establish a collaborative international forum (the 17‑country AI misuse group) to share AI‑assisted fraud patterns such as synthetic identities and deep‑fakes.
Adopt the BYOM (Bring‑Your‑Own‑Model) approach for rapid model deployment while keeping inference within the data warehouse for performance and security.
Unresolved issues
How to systematically detect and mitigate algorithmic bias in AI models, especially given India’s diverse social strata.
Managing alert fatigue and over‑triggering of AI‑generated warnings to maintain analyst responsiveness.
Ensuring robust privacy safeguards while creating centralized, cross‑institutional data lakes for AI analytics.
Defining clear accountability and liability frameworks for AI‑driven decisions that may affect taxpayers or citizens.
Standardising model validation and explainability requirements across different agencies and AI vendors.
Addressing the risk of AI systems becoming overly autonomous and making erroneous public‑facing decisions.
Integrating AI tools with legacy systems and ensuring data quality/cleanliness for sovereign LLM training.
Suggested compromises
Adopt a phased, sandbox‑first approach to AI deployment, allowing experimentation while containing risk.
Combine deterministic AI modules for high‑risk enforcement actions with probabilistic models for low‑risk advisory functions.
Maintain human‑in‑the‑loop oversight for all critical decisions, using AI to augment rather than replace human judgment.
Balance innovation with risk mitigation by following the RBI‑proposed seven sutras and six pillars, ensuring both agility and governance.
Use BYOM to let teams work with familiar tools while centralising inference on secure, sovereign infrastructure.
Thought Provoking Comments
Human has to drive AI rather than AI driving the human. We must build capacity in our resources so that AI tools augment, not replace, our decision‑making.
Highlights the ethical and practical stance that AI should be an assistive technology, emphasizing capacity building and human oversight, which sets a responsible tone for the entire symposium.
Established the foundational principle for the discussion, prompting subsequent speakers to frame their solutions around human‑in‑the‑loop designs and responsible deployment.
Speaker: Shri Ravi Agrawal
We need ‘right action’ – deterministic outcomes – for CBDT. Generative AI is probabilistic, but tax enforcement requires certainty. Our Bharatverse platform provides pre‑built layers to guarantee right action in every scenario.
Introduces the concept of deterministic AI for tax administration, challenging the prevailing view that generative models are sufficient, and proposes a concrete platform (Bharatverse) to achieve it.
Shifted the conversation from generic AI adoption to the necessity of certainty in enforcement, influencing later technical discussions about sovereign LLMs and deterministic chatbots.
Speaker: Ramesh Revuru (LTI Mindtree)
AI must not be autonomous in citizen interactions; over‑triggering erodes trust, and algorithmic bias can devastate a diverse society. Human analysts must validate AI leads to maintain legitimacy.
Broadens the scope beyond tax to systemic risks of AI, emphasizing bias, over‑alerting, and civil liberties, thereby deepening the ethical dimension of the dialogue.
Prompted speakers to address safeguards, human‑in‑the‑loop mechanisms, and bias mitigation in their solutions, and reinforced the earlier ethical concerns raised by the Chairman.
Speaker: Professor Mausam
The RBI’s AI governance sutras have been adopted as India’s national AI principles, and our MuleHunter.ai system, with 857 features, now achieves 80‑90% accuracy in detecting mule accounts across 26 banks, moving from reactive to preventive action.
Provides a concrete policy framework and a successful large‑scale AI deployment, illustrating how governance and technology can combine to produce measurable outcomes.
Introduced a national-level AI governance model and a high‑impact use case, steering the discussion toward scalable, cross‑institutional collaboration and preventive analytics.
Speaker: Suvendu Pati (RBI)
Our ‘Bring Your Own Model’ capability lets any trained model be imported and run inference directly in the data warehouse, delivering up to 25× faster scoring—turning model training hype into real‑world production value.
Shifts focus from model development to deployment at scale, emphasizing the importance of in‑warehouse inference and multimodal data integration for actionable AI.
Encouraged other presenters to consider deployment architectures and performance, influencing the technical depth of subsequent talks on sovereign LLMs and real‑time scoring.
Speaker: Martin Wilcox (Teradata)
Our AI co‑pilot ingests FIRs, generates a compliant investigative path, pulls telecom and open‑source data, and guides officers step‑by‑step, reducing manual effort and improving case outcomes.
Demonstrates a practical, end‑to‑end AI workflow for law enforcement, illustrating how AI can operationalize SOPs and integrate diverse data sources.
Provided a tangible example of AI in policing, reinforcing the theme of AI‑enabled workflow automation and prompting discussion on training and adoption across agencies.
Speaker: Ram Ganesh (CyberEye)
The ‘Nudge’ initiative uses a seven‑step strategy to proactively inform taxpayers of discrepancies, resulting in 1.57 lakh foreign‑asset disclosures worth ₹99,000 crore and a 3× increase in compliance within two years.
Offers concrete evidence that AI‑driven behavioral nudges can dramatically improve voluntary compliance, linking data analytics to real fiscal outcomes.
Validated the effectiveness of AI in achieving the symposium’s goal of easier compliance and trust‑based governance, steering the final discussion toward citizen‑centric AI applications.
Speaker: Shashi Bhushan Shukla (CBDT)
Overall Assessment

The discussion was shaped by a series of pivotal remarks that moved the dialogue from abstract enthusiasm to concrete, responsible, and results‑driven AI deployment. The Chairman’s emphasis on human‑centric AI set an ethical baseline, which was deepened by Professor Mausam’s warnings about bias and over‑alerting. Technical innovators like Ramesh Revuru and T. Srinivasan responded with deterministic, sovereign models, while Martin Wilcox highlighted the necessity of scalable inference. Policy leadership from Suvendu Pati introduced a national governance framework and a high‑impact mule‑hunter use case, bridging policy and practice. Operational examples from Ram Ganesh and the Nudge outcomes presented by Shashi Bhushan Shukla demonstrated tangible benefits for law enforcement and taxpayers alike. Collectively, these comments redirected the conversation toward accountable, cross‑sectoral AI strategies that prioritize trust, effectiveness, and citizen welfare.

Follow-up Questions
How can we prevent over‑triggering of AI alerts to avoid alert fatigue and maintain trust in the system?
Professor Mausam highlighted the risk of generating too many alerts, which can lead to users ignoring them, indicating a need for research on optimal alert thresholds and filtering mechanisms.
Speaker: Professor Mausam
What methods can be employed to detect and mitigate bias in AI models used for law enforcement and tax enforcement, especially given India’s diverse social strata?
He warned that algorithmic bias could have devastating effects in a heterogeneous society, suggesting further study on bias detection and fairness techniques.
Speaker: Professor Mausam
How can a centralized, cross‑jurisdictional data repository be established to enable integrated AI analytics across government agencies?
He emphasized the current siloed data situation and the need for unified data to improve intelligence, indicating research on data governance and integration frameworks.
Speaker: Professor Mausam
What should be the design of robust human‑in‑the‑loop frameworks to ensure accountability and prevent autonomous AI errors in enforcement?
Both speakers stressed that AI should not act autonomously and must involve human oversight, pointing to a need for guidelines and system designs that embed human review.
Speaker: Professor Mausam, Shri Ravi Agrawal
How effective and scalable is the AI sandbox model for cross‑sector data sharing and innovation, and what best practices can be derived?
He mentioned the AI sandbox as a key recommendation but did not detail its implementation, indicating a need for evaluation studies.
Speaker: Suvendu Pati
What are the privacy and civil‑liberties implications of expanded AI‑driven surveillance and data collection, and how can they be mitigated?
Both raised concerns about increased surveillance and the need to protect individual rights, suggesting research into privacy‑preserving AI techniques and policy safeguards.
Speaker: Professor Mausam, Shri Shashi Bhushan Shukla
What techniques can be developed to detect synthetic identities and deep‑fake documents used in financial crimes?
He identified synthetic identities and deep‑fakes as emerging AI‑assisted misuses, calling for research into detection methods.
Speaker: Shashi Bhushan Shukla
How can real‑time transaction scoring be implemented effectively for mule‑account detection across banks?
He described a future digital payments intelligence platform for real‑time scoring, indicating a need for technical and operational research.
Speaker: Suvendu Pati
What are the performance and scalability challenges of multimodal graph analytics for AI‑driven risk assessment at India‑scale, and how can they be addressed?
He noted that graph analytics are O(N²) and challenging at scale, highlighting a research gap in efficient algorithms and infrastructure.
Speaker: Martin Wilcox
How can capacity building and democratization of AI development be fostered within regulatory bodies like SEBI?
He emphasized internal AI capacity and democratization, suggesting the need for studies on training models, governance, and cultural change.
Speaker: Avneesh Pandey
What standards and frameworks are needed for explainable and transparent AI in tax enforcement to ensure fairness and public trust?
Both discussed the importance of explainability and accountability, indicating a need for concrete standards and evaluation metrics.
Speaker: Ravi Agrawal, Professor Mausam
What is the measurable impact of AI‑driven nudges on taxpayer behavior and compliance rates, and how can these interventions be optimized?
He presented results from nudge campaigns and implied further research to refine and assess their effectiveness.
Speaker: Shashi Bhushan Shukla
How effective are AI‑driven predictive policing and crime‑forecasting tools, and what are the ethical considerations in their deployment?
He mentioned predictive use‑cases and the need for careful, ethical deployment, suggesting research into accuracy, bias, and societal impact.
Speaker: Professor Mausam
Can AI‑generated code be reliably used for internal tax department tools, and what validation processes are required?
He recounted developing code via AI in a few hours and questioned blind reliance, indicating a need for validation frameworks for AI‑generated software.
Speaker: Ravi Agrawal
How can AI chatbots be designed to assist internal tax department staff in data retrieval and analysis without requiring coding expertise?
He suggested text‑to‑coding systems for staff, highlighting a research area in user‑friendly AI interfaces for non‑technical users.
Speaker: Professor Mausam

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

Building Scalable AI Through Global South Partnerships

Session at a glance: Summary, keypoints, and speakers overview

Summary

The session began with Sunil Wadhwani explaining that he and his brother founded the Wadhwani Institute for Artificial Intelligence in 2018, at a time when AI was still nascent and few resources were directed toward societal challenges [17-20]. After early setbacks they refocused on partnering with government ministries to identify priority problems, leading to a national tuberculosis (TB) initiative prompted by the health ministry’s focus on TB as the leading infectious-disease killer in India [41-44][45-48]. The institute developed a smartphone-based cough-analysis tool that instantly estimates a person’s TB risk, which has become a national standard and is the only such solution worldwide, prompting the WHO to call it a potential game-changer [56-62]. They also automated sputum-test analysis across 64 government labs, cutting result turnaround to one day, and created AI models that predict which patients will abandon treatment so that 2,000 caseworkers can target the most at-risk individuals, contributing to a 25% rise in TB detection last year [63-70][71-73]. In education, they addressed high dropout rates among primary students by deploying an AI-driven reading-proficiency suite that generates personalized exercises; after a successful pilot, the state of Rajasthan mandated the tool for three million children [75-84][87-90].


From these experiences Sunil distilled key lessons: scaling must be built with government from day one, solutions need to be designed for large-scale deployment early, and they must integrate with existing digital public infrastructure such as the TB case-management system Nikshay and the school platform Rakshak [92-100][111-124]. He emphasized that tools only succeed when they make frontline health workers’ and teachers’ jobs easier; otherwise adoption stalls [125-127]. Ankur highlighted the gap between innovation and impact and noted the Gates Foundation’s new “Advantage India for AI” pledge to support such collaborations [131-136]. Sunil reported that the institute now serves over 100 million Indians annually, has built more than 25 AI platforms, and is expanding to Africa, launching operations in Rwanda, Ethiopia and Kenya and offering capacity-building for civil servants on AI governance [143-150][155-156].


Lacina Kone added that Africa’s Smart Africa initiative seeks to replicate India’s digital public-infrastructure model, citing India’s Aadhaar and UPI as examples that can inform a continent-wide digital market [198-207][211-218]. Shalini Kapoor described “AI diffusion” as sharing playbooks across borders so that solutions built in one country can be adapted elsewhere, reinforcing the South-South learning loop [179-190][191-194]. S. Krishnan outlined India’s AI Mission model, which provides low-cost compute, sovereign models, and open data pipelines, and pledged to share these resources globally, underscoring the summit’s role in democratizing AI [305-319][320-327]. He also noted the Gates Foundation’s partnership in curating sessions and establishing an international cooperation centre to help other nations implement digital public infrastructure [329-333].


The participants agreed that coordinated government engagement, frugal scaling, and cross-regional collaboration are essential to translate AI innovations into widespread societal benefit for the Global South [92-100][125-127][179-190][305-319].


Keypoints


Major discussion points


AI-driven health and education solutions in India and the importance of government partnership – Sunil described AI tools for tuberculosis (cough-based detection, automated sputum analysis, adherence prediction) that are now national standards, and an AI-based reading-proficiency suite that has been mandated for millions of children [56-70][80-88]. He emphasized that scaling only succeeded after they began working directly with ministries, planning deployment at scale from day one, and leveraging existing digital public infrastructure such as Nikshay and Rakshak [92-106][111-124]. He also warned that tools must make frontline workers’ lives easier, otherwise adoption stalls [125-127].


Key lessons learned for impact at scale – The team realized that technical brilliance alone is insufficient; success requires (1) early and humble engagement with senior civil servants, (2) designing for mass deployment (training, logistics, hardware) from the outset, and (3) integrating with government-owned platforms that provide data pipelines and distribution channels [92-106][111-124][125-127].


South-South collaboration and export of the Indian model – After impacting over 100 million Indians, the institute began fielding requests from Kenya, Rwanda, Ethiopia, Indonesia, Egypt, Mexico, etc., and set up a foundation to support global deployments [139-156][160-168]. Panelists from Smart Africa and Kala highlighted the need for shared playbooks, joint investment, and regulatory harmonisation to replicate successes across the Global South [177-194][198-226][259-267].


India’s Digital Public Infrastructure (DPI) and AI Mission as a replicable framework – The discussion highlighted Aadhaar, UPI, and sector-specific platforms (Nikshay, Rakshak) as the backbone that made AI scaling possible, and S. Krishnan outlined India’s frugal AI-mission model (low-cost compute, sovereign models, open data) that the UN and other countries can adopt [111-124][289-321][322-327].


Democratizing AI and the summit’s role in fostering inclusive participation – Both hosts and panelists stressed that the summit aimed to move AI from elite circles to “the rooms” where youth and diverse stakeholders can see and shape the technology, aligning with the “people, planet, progress” mantra and the Gates Foundation’s partnership [128-136][289-298][327-334][357-365].


Overall purpose / goal of the discussion


The conversation was designed to showcase how AI can be democratized and scaled in India to address critical health and education challenges, extract concrete lessons about government-led deployment, and use those insights to catalyze South-South collaboration. By highlighting the role of digital public infrastructure and the Gates Foundation partnership, participants aimed to build a shared playbook that other Global-South nations can adopt, thereby accelerating AI-driven development across the region.


Overall tone and its evolution


Opening (0:09-14:54) – Optimistic and celebratory, with Sunil proudly recounting breakthrough AI products and their national impact.


Mid-section (14:55-18:40) – Reflective and instructional; Ankur and Sunil shift to “lessons learned,” acknowledging earlier missteps and emphasizing humility and systematic scaling.


Panel segment (19:46-32:24) – Collaborative and forward-looking, with participants from Smart Africa, Kala, and the Indian ministry exchanging ideas on pathways, regulatory alignment, and mutual learning.


Closing (32:25-48:45) – Appreciative and inspirational, highlighting the summit’s success in bringing diverse Global-South voices together and ending on a hopeful, “we can do this together” sentiment.


The tone moves from proud reporting of achievements, through candid self-critique and strategic guidance, to a unifying, hopeful call for collective action across the Global South.


Speakers

Ankur Vora – Chief Strategy Officer and President, Africa and India Office, Gates Foundation; expertise in AI for development, global-south partnerships. [S9][S10]


Sunil Wadhwani – Co-founder & Chairman, Wadhwani Institute for Artificial Intelligence; expertise in AI-driven health, education, and social impact solutions. [transcript]

S. Krishnan – Secretary, Ministry of Electronics and Information Technology (MeitY), Government of India; expertise in AI policy, digital public infrastructure, and national AI mission. [S1][S2]


Shalini Kapoor – Chief Strategist, XSTEP Foundation; expertise in AI strategy, partnership building, and scaling AI solutions in the Global South. [S14]


Lacina Kone – Director General and CEO, Smart Africa; expertise in continental AI policy, digital market integration, and public-private AI collaboration. [S11][S12][S13]


Shikoh Gitau – CEO, Kala; expertise in AI implementation for health and education, private-sector AI solutions in Africa. [S6][S8]


Additional speakers:


Nandan Nilekani – (mentioned as announcing “100 Pathways to 2030”; role not specified in the transcript). [transcript]

Full session report: Comprehensive analysis and detailed insights

The session opened with Ankur Vora asking Sunil Wadhwani to elaborate on the institute’s work in India, noting that his own speeches had highlighted AI-driven tools for oral-reading fluency and tuberculosis (TB) screening that were “just amazing” and deserved wider attention [1-7].


Sunil Wadhwani: He recounted how he and his brother Ramesh founded the Wadhwani Institute for Artificial Intelligence in 2018, when AI was still a niche field and before the advent of ChatGPT [17-20]. While serving on the Carnegie Mellon University board, he observed billions of dollars flowing into AI research [21-23] but recognised a stark gap: the majority of the world’s population lacked access to quality health care and education [24-26]. After an uninspiring early phase in which prototypes failed to scale, the institute refocused on direct collaboration with government ministries, identifying national priorities and redesigning its approach [31-38].


In health, the Ministry of Health identified TB as the country’s top infectious-disease killer, accounting for nearly two million deaths globally and half a million in India [41-44]. The institute mapped the TB cascade and pinpointed three critical bottlenecks: lack of functional X-ray equipment, slow sputum-test turnaround across 64 government labs, and poor adherence to toxic medication regimens [49-55]. Their AI responses were threefold: a smartphone-based cough-analysis app that instantly estimates a person’s TB risk and provides a probability score (now a national standard and the only solution of its kind worldwide, praised by the WHO as a potential game-changer) [56-62]; an AI model that fully automates sputum analysis, reducing result time to one day [63-66]; and predictive algorithms that flag patients likely to abandon treatment, enabling 2,000 caseworkers to focus on the most at-risk individuals and raising TB detection by 25% in the previous year [67-73].


In education, Sunil described the pervasive dropout problem among primary-school children, especially in grades 1-5, where early illiteracy triggers a cascade of failure, frustration and early entry into labour [75-84]. The institute built an AI-driven reading-proficiency suite that generates personalised home-reading exercises for each child. After a successful pilot, the state of Rajasthan mandated the tool for three million pupils, illustrating how a targeted AI solution can achieve massive scale [85-90].


From these experiences Sunil distilled four key lessons. First, scaling is impossible without early, humble engagement with senior civil servants; the institute must approach ministries with humility and a willingness to learn [95-99]. Second, solutions must be designed for mass deployment from day 1, with explicit plans for training, logistics and hardware distribution [100-106]. Third, integration with existing Digital Public Infrastructure (DPI) is essential – the TB tool plugs into the Nikshay case-management platform and the reading suite uses Rajasthan’s Rakshak system, both of which provide data pipelines and distribution channels [111-124]. Fourth, tools must make frontline workers’ lives easier; otherwise adoption stalls, regardless of technical merit [125-127].


Ankur Vora highlighted the “innovation-to-impact” gap, noting that progress is not a straight line and that the Gates Foundation’s new “Advantage India for AI” pledge aims to fund such collaborations [131-136].


Building on the Indian successes, Sunil announced that the institute now serves over 100 million Indians annually through more than 25 AI platforms [143-150] and is expanding to the Global South. Over the past year, governments from Kenya, Rwanda, Ethiopia, Indonesia, Egypt and Mexico have requested assistance. In response, a dedicated foundation was created, a team was dispatched to Africa, and operations are slated to begin in Rwanda, Ethiopia and Kenya [154-156]. The institute’s long-term ambition is to impact 500 million people by 2040, echoing Prime Minister Modi’s call to “design in India for the world and deliver these solutions to the world” [161-168].


The panel then turned to the broader theme of AI diffusion. Shalini Kapoor framed diffusion as the creation of “rails” that allow AI solutions to travel across borders, much like the digital public infrastructure that enabled India’s rapid adoption of Aadhaar and UPI [179-190]. She argued that shared playbooks would let a solution built in Kenya be reused in India and vice versa, reducing the need for each country to reinvent the wheel [191-194].


Lacina Kone, representing Smart Africa, expanded on this vision, describing the continent’s ambition to become a single digital market of 1.4 billion people. He noted that Africa can learn from India’s DPI successes (digital ID, UPI and large-scale election management) and that a harmonised regulatory “cloud” is the prerequisite for investment, likening finance to rain that falls only when the clouds are right [198-206][231-236]. The Smart Africa AI Council, launched in 2025, brings together ministers and private-sector leaders across 49 countries to coordinate computing, data, skills, regulation, market and investment themes [217-225].


Shikoh Gitau added that political goodwill is essential for AI to become both a technological and an economic issue. She defined the “collaboration tax” as the effort, resources, and coordination required for cross-border AI projects, and urged governments to absorb this cost [259-273]. She also cited a resonant comment from CV Madoka … CBC, which underscored the need for shared responsibility [259-273].


S. Krishnan, speaking for the Indian Ministry of Electronics and Information Technology, outlined the AI Mission model that underpins the country’s scaling success. The model provides compute at roughly one-third of global prices [305-307], sovereign AI models (the “AI Kosh”, literally the AI treasury) [327-329], and open-source data pipelines, all subsidised by the state [322-324]. He affirmed that these resources are available for sharing with the UN and other Global-South nations, and that the AI Kosh is a sovereign model built with taxpayer resources [320-327]. Krishnan stressed that democratising AI means opening the “rooms” and “halls” to youth and diverse stakeholders, allowing them to see and shape the technology directly [289-298][327-329].


In closing, Ankur thanked Sunil and highlighted the Gates Foundation’s role in curating the sessions [128-136]. Krishnan reflected that the summit succeeded in moving AI out of elite circles, placing people, planet and progress at the centre, and announced the establishment of an International Cooperation Centre under the National Institute of Smart Governance to support DPI implementation worldwide [329-334]. He concluded by reaffirming India’s commitment to open, secure and sovereign AI infrastructure [340-345].


The rapid-fire segment captured the overall sentiment: panelists expressed optimism that AI’s collaborative spirit, evident in the collective photograph of partners from Italy, Kenya, Anthropic, Google, Carnegie and the Gates Foundation, demonstrates that AI is about partnership, not competition [391-410].


Synthesis of shared pillars


1. Early, humble partnership with government ministries [95-99].


2. Design for mass-scale deployment from day 1 [100-106].


3. Embed solutions in existing DPI (Nikshay, Rakshak, etc.) [111-124].


4. Ensure tools ease frontline workers’ workflows [125-127].


These pillars, reinforced by concrete Indian case studies and the broader policy context of India’s AI Mission and Africa’s Smart Africa initiative, set a clear agenda for expanding AI-driven health and education impact across the Global South [92-106][111-124][179-190][305-311].


Session transcript: Complete transcript of the session
Ankur Vora

The first question around India. One of the things you’ve done and your organization has done is you found ways of taking the power of AI, democratizing it, and making sure it solves problems that we all care about. In my speeches, I’ve talked about you. I’ve talked about oral reading fluency, the tool whereby for less than, and if I’m stealing your thunder, sorry, but he’ll tell you a little bit more. But I’ve been talking about it because it’s just amazing. I’ve been talking about the fact that your TB screening, you can do things that we couldn’t imagine being done before. So can you tell us more about your work in India?

Sunil Wadhwani

Sure. Hi, everyone. Thank you. Welcome. Thanks for being here. Thank you for having me. I suspect the way I got over here was they needed, the Gates Foundation needed someone for this chat. They looked around. They found this guy wandering around with two badges. They figured that means he’s important. Let’s get him. And next thing I’m sitting over here. But thank you so much, Ankur. So, you know, my brother and I launched the Wadhwani Institute for Artificial Intelligence here in India about eight years ago, 2018. Back then, AI wasn’t a thing. ChatGPT hadn’t come out. But I happened to be serving on the board of trustees of Carnegie Mellon University in the U.S., where I had studied, gotten my master’s.

And CMU was then ranked number one in the world for artificial intelligence research and teaching. So being on the board, I could see all the billions of dollars coming into AI from Google and so on and so forth, even in those days. And it always pained me that none of this money was going into AI for society. You know, three, four billion people in the world out of eight billion don’t have access to decent health care, decent education. AI could be transformative. And that’s what we’re talking about today. But at that time, nothing was going on. So I spoke to my brother Ramesh. We decided, let’s launch this Institute for AI in India.

Prime Minister Modi came, inaugurated it, etc. So we hired a really good team of AI machine learning people, spoke to government, identified use cases, started working, and nothing happened. A couple of years went by, we were developing this, what we thought was really neat stuff, but it wasn’t scaling up. So we took a look at the issues, etc. And then we started realizing, look, we’re not approaching it quite right. We’ve got great AI solutions, but there is a lot more, a lot more to actually having impact than just having a nice technical solution. So I’ll, in a couple of minutes, tell you what we’re doing. So the key lessons that we’ve learned. But once we started figuring out, OK, what we were not doing and that we needed to be doing, then things started happening.

So just to give you two or three examples, as Ankur mentioned, we identify our problems, our challenges that we want to focus AI on by working directly with government. We talk to the health ministry about their national priorities for the next three, four, five years. What should we do? We talk to the education ministry and so on. So three years ago, the health ministry told us that tuberculosis is a very high priority for us. It’s the largest infectious disease killer in the world, kills close to two million people a year. Largest infectious disease killer in India kills close to half a million people over here. And for each person that dies, there are 20 others that don’t die, but they live miserable lives and they are infecting lots of other people as they go on.

So the government, the health ministry said, can you help? So we took a look at the whole. cascade of care in tuberculosis? What’s the patient journey like? Where are the three or four or five key pain points? And we identified, okay, diagnosis is number one, because in these economically vulnerable communities where TB happens, you need x -ray machines, you need sputum analysis, and in these communities, you don’t have all this stuff. You don’t have x -ray machines that work and are calibrated and so on. Problem number one. Problem number two, sputum analysis is another way of diagnosing TB, but these samples go to 64 government labs around India where they are ranulized, et cetera, and it takes time for the results to come back to the patient.

And for those that have TB, you’ve lost a lot of valuable time. Third big challenge is there’s a number of patients with TB who are on the medication regimen, but these are very toxic medicines. They really destroy your body while they’re trying to cure you of TB. So a lot of people stop taking these medicines, and they develop drug-resistant TB, which is much worse, 50% mortality rate, etc. So we started applying AI to each of these issues. On the diagnosis, we’ve come up with a way of detecting tuberculosis from the sound of a cough into a smartphone. It’s instant. It’s quick. We don’t just say yes or no. We give the risk of this person having TB, what’s the probability, etc.

That is now rolling out nationally, and it is becoming the national standard. And by the way, we’re the only country that has this. It doesn’t exist anywhere. The World Health Organization has told us this could be a game changer globally. For the sputum analysis, we’ve developed an AI model. So now the sputum analysis in the 64 government labs, totally automated. Results come out within a day, go back to the patient, treatment starts. Perhaps the most challenging thing. These patients who will fall off their medication. We’ve developed AI algorithms that predict well ahead of time which TB patients are likely to fall off the medication. So then the 2,000 TB caseworkers in India, which is a very limited number for 4 million TB patients, they can focus on the right people.

This is impacting now tens of millions of people. Just in the last year, the rate of TB detection, thanks to our cough against TB, has gone up by 25%. You may think that’s bad news, you know, higher numbers, but now we can treat these patients. We can get them on the right, you know, clinical care protocols. That’s one example. Education. Throughout the global south, there is a very high dropout rate of young children from schools, very high, in grades 1 through 5. Problem in India, problem everywhere. We got a call from a very large state government in India that said, we’ve got this issue, can you help? We sent a team in,

and we had to analyze what’s causing this high dropout rate. We learned that the single biggest reason is an inability of these very young children, 7, 8, 9 years old, to read. If you can’t read, it affects how you do in every subject, right? Science, history, geography: you struggle, you start failing, you get frustrated, and these are, again, poor communities. Your parents say, forget school, what’s the point? Come work in the field or work in the kitchen, and that affects the rest of their lives. So we’ve come up with an AI-based suite of tools that goes to the teacher, and for each child we come up with personalized exercises and

stories that they can read at home, which help them get better at their specific area of weakness. Each child is different. We were in pilot with the state. They were so impressed, they made it mandatory for all 3 million school kids in that state, in that age group; that was the state of Rajasthan. So that’s the kind of scale one can get. What’s the difference between what we were doing in our first two or three years versus what we’re doing now? What we learned is, number one, the only way to scale is government. You have to work with government from day one. Working with government isn’t easy, right? It’s challenging, it can be frustrating at times,

but you have to understand how to navigate it. How do you work with senior civil servants? You know, approach them with humility, not like you have the answers. You’re trying to understand the problem; you want to work with them. Secondly, think scale from day one. You can’t develop an AI solution and then say, oh, I want to use it on one million people. There are issues you have to think through right at the beginning: how will this scale out? How will large-scale training happen in the field? How will frontline health workers, or teachers, or other government staff use this? So you have to think that way upfront.

And in fact, with government, what we do now is, once we identify a problem, even before we work on the technical solution, we plan that deployment to scale, what will happen, and we make government accountable for a lot of it, just as we’re accountable for the technical side. The other really key learning, and this relates, Ankur, to what you were just saying, is that government has developed a lot of digital public infrastructure. Aadhaar is the great example that we’re all aware of, right? UPI, the Unified Payments Interface, is an incredible example of that. So there are lots of things in health care, in education, and in agriculture where the government has developed this digital public infrastructure.

And it’s critical. We didn’t know this; this was probably our key finding. It’s critical to find a government platform that you can integrate into. So for the examples I’ve given of TB, the government has a wonderful platform called Nikshay. It’s like a case management system for tuberculosis patients. We’ve integrated everything; we developed algorithms into that platform. For education, this early childhood reading proficiency, each state has a platform. Rajasthan, as an example, has a platform called Rakshak: 70,000 schools, 400,000 teachers, 8 million students. We plugged our algorithms into that platform. If these platforms hadn’t been there, we’d be struggling to scale any of this up. The final learning that we’ve had, and maybe this is the most important, is that all these technical solutions are great at a macro level to bring down TB, to improve reading proficiency, etc.

But at the end of the day, if the person using this tool, the frontline health worker, the teacher, if it doesn’t make life easier for them, in addition to improving education for the child or healthcare for the patient, it won’t happen. You can push all you want from the top that, oh, you must use this, but there’s got to be pull. They’ve got to want to use it, and that happens only when you make life easier for them.

Ankur Vora

Thank you very much, Sunil. We’ll do the next question a little bit quickly, but I do want to acknowledge a few things, call out a few things. One is this journey between innovation and impact. I love the learnings you talked about, because we sometimes keep focusing on the innovation part, and we think that the road from innovation to impact is a straight road, and it’s not. It’s possible, it’s probable, but it’s not guaranteed, and we need to work hard at it. Your learnings get to that. It’s also one of the reasons why we love our seven-year partnership, and hopefully it’ll be much more as, as some of you know, the Gates Foundation yesterday announced a new initiative, a new pledge around AI for AI, which is Advantage India for AI.

And the idea is to make investments in India for the Global South, and we’re looking forward to partnering with you. So, Sunil, one of the places where we do partner, and in fact earlier today we were talking about work in Ethiopia and Rwanda and Kenya: can you talk a little bit more about how you think about your work in the context of South-South partnership, and how do you take the learnings from one place to another?

Sunil Wadhwani

Sure. So when we got started, our goal was only India, right? My brother and I, we are from India, our hearts are still here. So we weren’t thinking about any place else. But what’s happened is as our AI solutions have been scaling up in India quite dramatically, and we are today impacting over 100 million people a year, we’ve developed over 25 AI platforms in partnership with government. We’ve started over the last year getting a lot of inquiries from governments around the world in the global South saying what you’re doing in India, we need in Kenya or in Rwanda or in Indonesia or Egypt or Mexico. By the way, in India, we don’t just develop these solutions. We also do a lot of capacity building, meaning training of senior civil servants on how you can use AI.

What it’s good for, what it’s not good for, etc. We help ministries develop data governance standards, use case frameworks, and so on. Then we do the actual solutions development; that’s the biggest chunk of what we do. But then we have a big deployment team, close to 100 people, making sure that these things get deployed. So we were thinking only India. But over the last year, we started getting all these inquiries, and we finally said, look, we set up this foundation to have impact globally. So now we’ve sent a team out to Africa to meet with several countries. We are starting operations this month in Rwanda, Ethiopia, and Kenya. And I’m glad to see a colleague over here from Smart Africa.

We will be partnering in this work; we’re very excited about that. And beyond that, we expect to be going to a number of other places. Today, as I said, we’re impacting maybe 100 million people in India. Our goal by the year 2040 is to be impacting 500 million people. We are very excited about our partnership with you at the Gates Foundation. And Prime Minister Modi, if you heard him yesterday, gave a brilliant speech. As part of that, he said, for the last several years I’ve been saying Make in India for the world. He said, now I want to add to that: in this age of AI, design in India for the world, develop in India for the world, and then deliver these solutions to the world.

And that is what we’re trying to do. That’s the evolution in our thinking. And again, we’re very excited about the partnership with you.

Ankur Vora

Thank you, Sunil. I think we have a change of plans. Thank you so much. Sunil, if you could please stay on stage, and let me invite our panel up: Shalini Kapoor, Chief Strategist at the EkStep Foundation; Lacina Kone, Director General and CEO of Smart Africa; and Shikoh Gitau, CEO of Kala. Thank you.

Shalini Kapoor

Yeah, good to start? Okay. Thank you so much, thank you, Ankur. Here we are. Thanks, Sunil, for spending some more time with us; thanks, Shikoh, we have been bumping into each other; and thanks, Lacina, for being here. So we’ll get into some of the discussion on the South-South collaboration that is spurring innovation and diffusing AI into all the sectors. AI diffusion is about the routes and the rails which need to be laid for AI, the same way digital rails were laid in the DPI era. And now they can be shared. There are playbooks; they can be shared.

The concept of AI diffusion started with Jeffrey Hinton talking about it; he’s a professor at Georgetown in D.C. He talked about how electricity was actually invented in Germany, but it was diffused across India and across the USA, where the USA made so many strides with it. So like electricity, AI is a general purpose technology, a GPT. And between invention and impact, there’s a big layer of adoption and diffusion which needs to be there so that AI gets diffused into society. And when it gets diffused into society, something which was built in Kenya can come to India, and something which was built in India can go to Kenya, because there are playbooks which can be leveraged.

Not everybody needs to build everything. Not everybody needs to build the entire stack. How do we learn from each other? And how does that South-South collaboration happen? That’s the focus. So I’ll start with the pathways: what are the pathways to scale? And I’ll start with Lacina. You are leading Smart Africa, and you help coordinate across a lot of nations at different stages of AI: somebody in pilot, somebody in production, somebody has solved data, somebody has solved language, somebody has solved voice AI. What do you think are the opportunities for South-South collaboration in building these pathways together?

Lacina Kone

Yeah, thank you very much for inviting me. In fact, before we even talk about South-South, collaboration is the very reason for the creation of Smart Africa. Because if you look at Africa through just Kenya, which is 50 million, or Ghana, which is 30 million, or Nigeria, which is 240 million, you’re missing the point. But if you look at Africa as 1.4 billion people, then to be able to leverage that 1.4 billion, you need collaboration and scale. So coming from that, when you look at the continent of Africa, which is technically speaking part of the Global South, I don’t want to get into that, the South-South collaboration is very important because we do not need to reinvent the wheel.

India has shown the world what DPI actually means for 1.4 billion people. It has a working digital ID; it is a country able to organize an election for 850 million people to vote. You have to give them kudos. So we don’t need to reinvent the wheel. Africa can learn a lot from India, even with the use cases. But why use cases particularly? Because there’s a lot of similarity between India and Africa in terms of culture and in terms of values, and we all know we are into AI, we are into digital transformation. It’s not just about creating technology; the real measure for a society is inclusion of the population and ethical use of the technology, exactly.

So coming from that, that’s one of the reasons to reboost the Smart Africa Initiative. The creation of the Africa AI Council came into play when, last April, on April 4th, 2025, 49 countries came together to sign the declaration. Subsequent to that, the AI Council came to life on November 12th, 2025 in Guinea, after our board of directors, which is represented by heads of state, accepted it. We have had a first meeting already. The council consists of 15 members. It is not driven only by the public sector: it has seven ministers coming from seven different countries and eight private sector members. Why? Because in the constitution of Smart Africa, it’s private sector first. We believe that government should be creating a conducive environment for the private sector to excel.

It cannot be dominated by the public sector. And underneath the council, we have six thematic groups: compute, so we can look at South-South collaboration on computing power; data sets; skills; regulation, which is the governance; the market; and investment. And when it comes to investment, there is something we need to know: the investment cycle of prior technologies is too slow for AI. Just look back 12 months ago. Where were we, and where are we today? So this is something we need to look at carefully.

We are looking at three aspects. One, the government needs to create a conducive environment for the private sector to chip in. Two, the private sector needs to execute, but everyone cries that finance is the issue. I always say finance is not the issue; finance is the last thing you should think about. You know why? Because financing is like the rain. For rain to fall, you need certain conditions in the clouds. Those clouds are the regulatory environment, the conducive environment for business, because the private sector does not like unpredictability. And the third thing is the philanthropies. The reason I want to speak about them is that the philanthropies need to serve as a de-risking layer, because government is the last to invest in a technology; they want to make sure it’s going to work.

So it’s time for the philanthropies to accelerate, to use their funding as de-risking, so that the private sector can chip in, and so on and so forth. Thank you.

Shalini Kapoor

Yeah, thank you so much. You talked about DPI, and about the private sector and the public sector coming together; it’s the entire ecosystem. And on the 18th, Mr. Nandan Nilekani actually announced 100 Pathways to 2030, which is a clarion call, if you ask me, a clarion call for people to join in creating pathways. Because, you know, suppose Edmund Hillary has climbed Mount Everest. Do you think he’ll come back and say, I’m not going to tell you how I climbed? What route did I take? Where did I go? Where did I not go? What did I see? He will talk about them, right? He will talk about them so that it is easy for other travelers to come.

A pathway is like that: if someone has walked the AI pathway, others should learn from it and benefit from it. So, Shikoh, you were with us on the stage when you joined, and you said that the Global South would like to join us on 100 Pathways to Scale. Please tell us, how can that diffusion help? How can we collaborate to get AI use cases from pilot to production?

Shikoh Gitau

Can I finish? Okay. It’s that collaboration. As I was saying, it’s this idea of how we bring this multiplicity of thinking together, given that we have the same exact challenges. We have multiple languages. We have cultural diversity. We have things that we need to work on together. How do we collaborate? And for us, the biggest takeaway is: how do we make AI not just a technology, but a political and economic issue? Yeah? That was the biggest one, because the people are there. The builders were there. The researchers were there. The policymakers were there. But we need that political goodwill to be able to make this work together.

And something that CV Madoka, I’ve forgotten the organization, CBC, said really struck a chord with everybody, including myself: we need to start having a conversation on what is called the collaboration tax. It’s something, DG, we were talking about when we were coming in. I’ll define the collaboration tax as the effort, resources, and things that you need to put together to be able to collaborate with each other. It’s what government should be addressing; it’s what that political part of AI should be doing: bringing the collaboration together, so that people can come together without the pain of collaboration. And that’s what we need to be talking about, because the resources are there, the people are there, and people are willing to collaborate and work together. We saw it this week. And as the minister said, while you’re chasing the Guinness World Record of having the most number of people, also chase the diversity that these countries are seeing. Thank you so much for bringing Africa and the world to India. Yeah.

Shalini Kapoor

Thank you, Shikoh. Thank you so much. We’ll take a small break in the panel discussion and have Mr. Krishnan come here and talk about how scale and collaboration can help the South-South effort, and what is transferable from India. I know he’s busy across everything. So over to you, Mr. Krishnan.

Ankur Vora

I was just going to do one more thing, which is: thank you, Shalini, and thank you to the panel for allowing us this small break. For those of you who don’t know, Secretary Krishnan from the Ministry of Electronics and IT (MeitY) is over here. He’s had probably one of the most amazing, successful weeks this week, so please join me and give him a big round of applause. Secretary Krishnan, thank you so much for being here. As a proud Indian, I’m quite excited about what happened this week. And as somebody who cares about the agenda of the Global South: everything that happened in India this week put the Global South agenda right front and center.

So thank you for doing that. There were so many announcements made this week about how we’re going to make progress together in the months and years to come. We’d love to welcome you to give a little more context on the announcements made and the achievements of this week. Thank you. Welcome.

S. Krishnan

Let me first apologize to the panel for having stepped in abruptly, but I’m juggling many things going on across the summit, and this is a very important session as far as I’m concerned, because if this particular summit was about one thing, it was about the Global South. The fact that India, representing the Global South, could actually dare to host this event, and dare to host it on this scale. The one thing we were very clear about is that summits thus far have basically been about country leaders, about CEOs, and about some experts getting together in closed rooms, not really having the opportunity to showcase to people what the possibility of the technology is.

And this particular event gave us that opportunity. We were very clear that what we wanted to do was let people into the rooms. We wanted to make sure that people, especially youth, had the opportunity to come and listen to the best minds there are on artificial intelligence as a technology, and to every possible perspective on how this technology can work for everybody. And the second thing, of course, as you’re well aware: we said people, planet, progress. Two aspects of it were very important, or three, if I may. One is democratizing access to AI, to all the AI infrastructure and resources. That was one key aspect. The second key aspect was including those who are not ordinarily given access to this, those who are excluded.

And the third key aspect is putting humans at the center of this process, to make sure that this is a technology that works for people. And I think the Prime Minister was very clear and emphatic in his address yesterday, where he put people, or manna, right at the heart of AI. So to enable this to happen, multiple things have to happen. We have to find frugal ways to innovate in order to make these resources available. We believe that our own IndiaAI Mission model is one of those frugal ways in which the compute infrastructure, the model infrastructure, and the data set infrastructure can be created for each country, because some of this needs to be done on a specific basis for regions and for countries.

In India, we are at subcontinental scale. There are 22 official languages and many other languages which need to be understood. We understand this cultural and linguistic diversity better than any other region in the world, and at a continental scale we can contribute to that effort. That is one key element. The second key element, of course, is to create compute in a way such that people are not able to build moats around it. The implication that you need resources of a kind that nobody else has, that only we can do this, is not an approach we wanted to take.

So we created this model where the private sector is encouraged to invest, and where access is something which the government subsidizes. As a result, today AI compute in India is available at a third of the price it is available at in the rest of the world. I think that has been the significant achievement. The United Nations asked us: once you build it out at scale, would it be possible to share it with the rest of the world? We have committed to them that as and when the capacity is adequate to meet other requirements, we will be happy to share it.

We are happy to share the model even now. And AI Kosh, the AI treasury as we call it, is something we are happy to share even now. The models that we have supported in India, and which have been built out as sovereign models in India, are again technology that we are happy to share with the Global South; it’s something we can enable. Some of it we’ve built with our own resources, so it is in a sense completely sovereign: unlike in many other places, it’s something the government has paid for from taxpayer resources and can use. The third element, of course, is the data sets and how they are shared; that framework is again something we can certainly share. The most important thing, and I think this is also showcased so eloquently in the expo, is the range of applications which have been created out of this. There are close to 900 startups across all those halls who have done a variety of things. Even in the main hall, with the Gates Foundation, we have set up a lot of applications and the African village, which is such a showcase, even to the leaders, fundamentally of applications which can work there, and fundamentally for people to see applications which have worked in different parts of the world and can be taken elsewhere.

So all of those are available. Those are resources which we want to share; these are resources we want to actually give. And most importantly, as I said, if there is one thing that this summit reflects, it is that for the first time we’ve actually democratized AI. We’ve shown what democratic AI looks like when people are let into the rooms, when people are let into the halls and can see for themselves how this would work. And it gives me immense pleasure that a very, very key partner for us in all of this has been the Gates Foundation, right from the very outset, right from the planning stage of how we wanted to do this.

This particular set of sessions on the Global South is something we worked on closely and curated carefully. We have put together sessions which will be relevant to this group, and we have always made sure that in addition to this, on every occasion, whether it is in the space of DPI or in the space of any of the other applications, we are in a position to support it. Under one of our organizations, the National Institute of Smart Governance, we have now put in place a center which is fundamentally focused on international cooperation, so that it can provide support to other countries where DPIs are to be implemented, on how to ground them.

And we believe that probably the most effective way of dealing with this is to cooperate amongst ourselves, so that we are able to take it out, learn from each other, and contribute to each other. That is something we are now really ready to do. India knows what it is to be deprived of or denied technology. India knows what it is to try and work your way past that. We have managed to do that. We have managed to democratize it. We have managed to make it available to people at scale. We have tried to keep it open source, and we have tried to protect it in a number of ways from cyber attacks in each of those areas.

So in this entire technology stack there is experience; there is the way we leapfrogged different stages. I think if we work together in the AI space, likewise, there is so much that can be accomplished. And we undertake as a nation, I think I can say with responsibility, that we will have devices and structures through which we can deepen this cooperation and deepen the support. We can enable this in a number of ways and continue to stay engaged, through the Gates Foundation and through the other institutions, to actually make this happen. So thank you very much. Thank you to the Gates Foundation for curating this particular event, and thanks to all of you for participating in it.

I mean, it’s one thing to arrange it, one thing to organize it, but another thing for all of you to actually come up here and put up with some of the inconvenience which would have been caused. India is not a very convenient country at the best of times, but India is a country with spirit, and India is a country which fixes things.

Shalini Kapoor

Okay, we have some time with us. How much? Five minutes. And we need to have a question for Sunil. He has been working on so many interesting things; I mean, I listened to the stories of Wadhwani AI, and they inspire you thoroughly. So Sunil, I’ll give it to you. I want people to hear your message: how can this work that has been done in India help the Global South?

Sunil Wadhwani

So as Mr. Kone of Smart Africa said 15 minutes ago, the challenges that we have in the Global South generally, whether in Africa, India, or other countries, are similar. The values we have, more importantly, are very similar. The strengths that we bring to the table in terms of our talent, our youth, et cetera, are similar. So I think it’s mutual learning. It’s not one way; it’s not that we’ve developed great stuff in India that can just be, you know, taken over. It’s mutual learning on both sides, a mutual sharing of ideas. There are lots of very good things happening in Africa and Asia that we can learn from over here.

On the technology side, as you were saying, Shalini, we’ve been fortunate. We’ve had a government that is very pro-technology. There’s a tremendous range of digital public infrastructure that we can access in India, which provides data pipelines and digital distribution systems, without which none of this AI can scale. There’s been a very clear regulatory framework for AI developed in India that really helps. And most important, there’s an openness in government, driven, I think, by the Prime Minister’s vision and belief that technology and AI can truly transform societal development. Those, to me, are the big things, more than individual AI solutions, that really make a difference. And I see that happening in many countries in Africa.

Shalini Kapoor

Sure. Thank you so much. We are literally at the eve of the summit getting over. It’s been a fantastic week: meeting the best people, listening to the best sessions, navigating the traffic, yes. But like Secretary Krishnan said, we fix everything. So I just have one last question for each of the panelists: what’s the best thing you liked about the summit? To you, Sunil, first. One moment, one feeling that you will carry forward.

Sunil Wadhwani

I will give you a counterintuitive answer. AI is making the world move faster and faster and faster. And all the traffic challenges we’ve had over here are teaching us patience. You will get there. Things will happen. Life will go on.

Shalini Kapoor

Thank you so much. Shikoh, what’s the one feeling you’ll travel back with, back to Africa?

Shikoh Gitau

I think my best moment, and I’m going to be selfish in picking it, was the moment in the Oberoi when I stood and saw this diverse sea of faces, I think about 300 people, and we were all celebrating: we can do this as the Global South. This is happening on our turf, and there’s this belief that the Global South has something to offer in this AI conversation. So it’s no longer a two-horse race; it’s a multiple-horse race. Thank you.

Lacina Kone

For me, you know, our vision is to transform Africa into a single digital market. Coming here this time shows me our future already. Having 1.4 billion people under one regulation: what does that feel like? You know what I’m talking about; regulatory harmonization is one of our obstacles, and in India you do not have that. You are multicultural, multilingual. What does that feel like? Including the traffic in the morning as well, of course. Thank you.

Shalini Kapoor

And my best moment of the summit was back at the Oberoi on the evening of the 18th, when several partners from across Italy and Kenya, Anthropic, Google, Carnegie, ORF, Gates, EkStep, stood next to Nandan, and we all came together for 100 Pathways to 2030. We were not doing the non-collaboration pictures going around on Insta; we were all together. So AI is about collaboration, not competition. That’s the theme. Thank you, thank you, really enjoyed it. You are the best. It’s a pleasure. More photos, more photos. Thank you.

Related Resources: Knowledge base sources related to the discussion topics (38)
Factual Notes: Claims verified against the Diplo knowledge base (4)
Confirmed (high confidence)

“While serving on the Carnegie Mellon University board, he observed billions of dollars flowing into AI research.”

The knowledge base notes that, as a board member, he could see billions of dollars pouring into AI from companies like Google, confirming his observation [S15].

Additional Context (high confidence)

“The Ministry of Health identified TB as the country’s top infectious‑disease killer, accounting for nearly two million deaths globally and half a million in India.”

WHO data cited in the knowledge base confirms TB as the leading infectious-disease killer worldwide, though the exact death figures differ from the claim; the source provides the broader context of TB’s impact [S116].

Confirmed (medium confidence)

“The institute mapped the TB cascade and pinpointed three critical bottlenecks: lack of functional X‑ray equipment, slow sputum‑test turnaround across 64 government labs, and poor adherence to toxic medication regimens.”

The knowledge base lists the same three challenges-limited X-ray machines, costly and time-consuming sputum analysis, and toxic drug regimens leading to poor adherence-supporting the institute’s identified bottlenecks [S115].

Additional Context (medium)

“The TB AI tool plugs into the Nikshay case‑management platform, a Digital Public Infrastructure.”

A related source describes a government DPI called Nixia (likely referring to Nikshay) that serves as a patient-management system for TB data, providing context that such integration with a DPI exists [S5].

External Sources (117)
S1
AI-Powered Chips and Skills Shaping India’s Next-Gen Workforce — S. Krishnan, Secretary of MeitY (Ministry of Electronics and Information Technology)
S2
https://dig.watch/event/india-ai-impact-summit-2026/panel-discussion-ai-cybersecurity-_-india-ai-impact-summit — Sri S. Krishnan, Secretary, Ministry of Electronics and IT, my dear friend, Professor Ravindran, Excellencies, distingui…
S3
Panel Discussion AI & Cybersecurity _ India AI Impact Summit — Sorry, could I make a quick announcement to have all the panelists and the speakers on the stage for a quick photo? Mr. …
S4
Keynote-Vishal Sikka — -Honorable Ashwini Vasanthaji: Role/Title: Minister, Ministry of IT; Area of expertise: Information Technology -Sunil: …
S5
AI for Social Good Using Technology to Create Real-World Impact — – James Manyika- Sunil Wadhwani – Sangbu Kim- Sunil Wadhwani
S6
WS #214 AI Readiness in Africa in a Shifting Geopolitical Landscape — **Shikoh Gitau**, CEO of KALA, participated virtually and brought private sector perspectives. Her pointed question abou…
S7
IGF 2025: Africa charts a sovereign path for AI governance — African leaders at theInternet Governance Forum (IGF) 2025 in Oslocalled for urgent action to build sovereign and ethica…
S8
What is it about AI that we need to regulate? — InWS #214, Shikoh Gitau asked:”But who is drafting these policies? What agenda do they have? Do they have Africa at hear…
S9
Responsible AI for Shared Prosperity — -Ankur Vora- Chief Strategy Officer and President of the Africa and India Office at the Gates Foundation -Co-Moderator-…
S10
Keynote-Ankur Vora — “AI is not a leap into the unknown for India. It is the next chapter in a journey of building solutions that serve every…
S11
Open Forum #47 Demystifying WSIS+20 — – **Lacina Kone** – CEO and Director General of Smart Africa, a Pan-African organization based in Kigali – **UNKNOWN** …
S12
WS #214 AI Readiness in Africa in a Shifting Geopolitical Landscape — **Lacina Kone**, Director General and CEO of Smart Africa, provided a continental perspective that proved influential th…
S13
What policy levers can bridge the AI divide? — – **Lacina Kone**: Director General and Chief Executive Officer, Smart Africa LJ Rich: H.E. Dr. Tatenda Anastasia Mavat…
S14
https://dig.watch/event/india-ai-impact-summit-2026/building-scalable-ai-through-global-south-partnerships — Thank you, Sunil. Are we I think we have a change of plans. Thank you so much. And Sunil, if you could please stay on st…
S15
Building Scalable AI Through Global South Partnerships — – Sunil Wadhwani- Shalini Kapoor
S16
Safe and Responsible AI at Scale Practical Pathways — – Shalini Kapoor- Ashish Srivastava
S17
Panel Discussion AI in Digital Public Infrastructure (DPI) India AI Impact Summit — And accessibility has to be also broadened in terms of multi -modality and also, where necessary, include a human in the…
S18
Building India’s Digital and Industrial Future with AI — So I think we are in a very good place. We have got very robust infrastructure. And how do we now navigate this world of…
S19
Sovereign AI for India – Building Indigenous Capabilities for National and Global Impact — India possesses many essential ingredients for AI success: a robust software services industry, thriving startup ecosyst…
S20
Ad Hoc Consultation: Thursday 8th February, Morning session — Advocating for technology transfer positions India as a proponent for advancing Sustainable Development Goal 9, which em…
S21
Leading in the Digital Era: How can the Public Sector prepare for the AI age? — Tawfik Jelassi:Thank you, Pratik. Good morning, all excellencies, esteemed guests, vicious participants. I’m very please…
S22
FOREWORD — First, the public sector needs to be aware of the potential of AI and related emerging technologies to de…
S23
Empowering Civil Servants for Digital Transformation | IGF 2023 Open Forum #60 — Audience:Yeah, thank you. I’m Odas, CEO of Digital Muganda. So I’m coming more from a private sector perspective. I thin…
S24
Leaders TalkX: Securing the Digital Realm: Collaborative Strategies for Trust and Resilience — Marash Dukaj:Thank you for this exciting question. First of all, their colleagues, ministers, ladies and gentlemen, I am…
S25
The Global Power Shift India’s Rise in AI & Semiconductors — But also private capital, which within this week, the numbers that I’m hearing is more than 100 billion dollar plus comm…
S26
https://dig.watch/event/india-ai-impact-summit-2026/building-climate-resilient-systems-with-ai — And part of my work has been in energy. Part of my work has been in the built environment. Thank you. but I’m representi…
S27
Transforming Health Systems with AI From Lab to Last Mile — A major announcement during the session revealed a groundbreaking collaborative funding initiative between three organis…
S28
Open Forum #64 Local AI Policy Pathways for Sustainable Digital Economies — – **Infrastructure Sharing and Cooperative Models**: Multiple speakers advocated for shared computing infrastructure (re…
S29
Building Population-Scale Digital Public Infrastructure for AI — “These systems need to be auditable.”[58]. “there is this urgency to get things done and that might make one think very …
S30
Host Country Open Stage — High level of consensus on fundamental principles despite working in different domains. This suggests emerging best prac…
S31
Building a Global Partnership for Responsible Cyber Behavior | IGF 2023 Launch / Award Event #69 — This initiative aims to support these organisations in preventing and responding to cyber attacks effectively. Understan…
S32
Resilient infrastructure for a sustainable world — – **Importance of Partnerships and Open Collaboration**: All speakers stressed that no single organization can address t…
S33
AI as critical infrastructure for continuity in public services — Excellent question. Thank you so much for that. Good afternoon, everybody. Thank you for all the comments. So we’ve been…
S34
A digital public infrastructure strategy for sustainable development – Exploring effective possibilities for regional cooperation (University of Western Australia) — On a positive note, the potential for South-South cooperation and shared learning experiences in the field of DPI was hi…
S35
WS #82 A Global South perspective on AI governance — AUDIENCE: Thank you for the wonderful thought provoking conversation. I wanted to ask, I only attended half of the ses…
S36
Collaborative AI Network – Strengthening Skills Research and Innovation — Garg frames AI itself as a possible digital public infrastructure that must be trusted, interoperable and shareable, dra…
S37
AI for agriculture Scaling Intelegence for food and climate resiliance — It is being designed as a replicable public infrastructure model for India and the entire global south. In partnership w…
S38
AI Impact Summit 2026: Global Ministerial Discussions on Inclusive AI Development — The AI Impact Summit held in New Delhi brought together ministers and senior officials from multiple countries for discu…
S39
Leveraging AI to Support Gender Inclusivity | IGF 2023 WS #235 — By engaging users and technical communities, policymakers can gain valuable insights and perspectives, ultimately leadin…
S40
Democratizing AI: Open foundations and shared resources for global impact — Bernard Maissen: Yes, thank you. Hello, everybody, dear panelists. Nina, thank you for giving me the floor. In the globa…
S41
Transforming Agriculture_ AI for Resilient and Inclusive Food Systems — We are committed to work together on this through knowledge sharing, co -operation and collaboration. creation and capac…
S42
https://dig.watch/event/india-ai-impact-summit-2026/building-scalable-ai-through-global-south-partnerships — Where were we? And where we are today? So this is something we need to look at carefully. We are looking at the three as…
S43
Policy Guidelines — line 1 , disseminating the fruits of its research and scholarship as widely as possible: The intention of the policy is …
S44
Meeting REPORT — The neutrality of the sentiment expressed in this context suggests an acceptance of the policy’s intent without explicit…
S45
Technology Rewiring Global Finance: A Panel Discussion Summary — In one day, there was maximum withdrawal of $7 billion equivalent of assets from Binance.com. No issues. In that week, t…
S46
Press Conference: Closing the AI Access Gap — Private sector can help deliver progress on sustainable development goals The goal is to move from a narrative to actio…
S47
Multistakeholder Partnerships for Thriving AI Ecosystems — The panel revealed sophisticated understanding of how different stakeholders must collaborate whilst maintaining distinc…
S48
Building Scalable AI Through Global South Partnerships — “One, the government needs to be creating a conducive environment for private sector to chip in”[33]. “We do believe tha…
S49
WS #288 An AI Policy Research Roadmap for Evidence-Based AI Policy — Virginia Dignam: Thank you very much, Isadora. No pressure, I see. You want me to say all kinds of things. I hope that i…
S50
AI for Democracy_ Reimagining Governance in the Age of Intelligence — “Global governance of AI is a precursor for a democratic development and evolution.”[1]. “So the way to democratize thes…
S51
Impact & the Role of AI How Artificial Intelligence Is Changing Everything — Power is accumulating rapidly in the hands of those at the forefront of AI development. A handful of technology corporat…
S52
Democratizing AI Building Trustworthy Systems for Everyone — And so there are different in quotes, markets here at UL. People who can pay at different levels. Even within a country …
S53
How the Global South Is Accelerating AI Adoption_ Finance Sector Insights — Compute infrastructure and research talent shortages present bigger obstacles than regulatory constraints Data residenc…
S54
Planetary Limits of AI: Governance for Just Digitalisation? | IGF 2023 Open Forum #37 — It is important to address this lack of diversity to ensure that AI systems are fair, inclusive, and do not perpetuate b…
S55
Governments and Technical Community: A Successful Model of Multistakeholder Collaboration for Achieving the SDGs — This comment articulated the bidirectional nature of learning required for effective collaboration, moving beyond the co…
S56
Accelerating Structural Transformation and Industrialization in Developing Countries: Navigating the Future with Advanced ICTs and Industry 4.0 — This comment introduces a critical geopolitical dimension often overlooked in technical discussions. It highlights how d…
S57
Empowering Inclusive and Sustainable Trade in Asia-Pacific: Perspectives on the WTO E-commerce Moratorium — To ensure successful integration, bridging the gap between academia and industry is essential. Due to the rapid advancem…
S58
Ad Hoc Consultation: Thursday 8th February, Morning session — Egypt aligns with South Africa and Peru on technology transfer inclusion. Egypt aligns with statements by South Africa …
S59
Democratizing AI: Open foundations and shared resources for global impact — The ‘sovereign-able’ approach allows countries and organizations to build upon open foundation models while maintaining …
S60
Global AI Policy Framework: International Cooperation and Historical Perspectives — -Sovereignty vs. Openness in AI Development: The concept of “open sovereignty” emerged as a key theme – the idea that co…
S61
Discussion Report: Sovereign AI in Defence and National Security — Faisal responds to concerns about competing global AI policies by arguing that the sovereign AI framework is adaptable t…
S62
Taxing Tech Titans: Policy Options for the Global South | IGF 2023 WS #443 — Overall, the analysis highlights the multifaceted nature of international taxation and the complex considerations involv…
S63
TAX COOPERATION POLICY BRIEF — On March 2019, the PCT launched the First Global Conference on the Platform for Collaboration on Tax Report. 10 The Con…
S64
Panel Discussion AI in Digital Public Infrastructure (DPI) India AI Impact Summit — I mean, access to compute is what makes or breaks a startup. So the way in India, the way I see it, the way we have star…
S65
AI as critical infrastructure for continuity in public services — Data is siloed, data is not ready for AI scale. There’s no governance built around data. And that’s why POCs, you use a …
S66
Building Population-Scale Digital Public Infrastructure for AI — Combine urgency of deployment with systematic safety frameworks, using DPI infrastructure as foundation for safe scaling
S67
Building Scalable AI Through Global South Partnerships — And we make government accountable for a lot of it. Just as we’re accountable for the technical side. The other really k…
S68
https://dig.watch/event/india-ai-impact-summit-2026/building-scalable-ai-through-global-south-partnerships — And we make government accountable for a lot of it. Just as we’re accountable for the technical side. The other really k…
S69
AI for Social Good Using Technology to Create Real-World Impact — Sunil Wadhwani shared concrete examples from Wadhwani AI’s work, including AI systems that diagnose tuberculosis from co…
S70
AI health tools need clinicians to prevent serious risks, Oxford study warns — The University of Oxfordhas warnedthat AI in healthcare, primarily through chatbots, should not operate without human ov…
S71
Effective Governance for Open Digital Ecosystems | IGF 2023 Open Forum #65 — Cooperation, sharing of technology, and learning are important for effective implementation at scale. Changing mindsets …
S72
WS #271 Data Agency Scaling Next Gen Digital Economy Infrastructure — This challenges traditional hierarchies of expertise in technology development and advocates for genuine inclusion of di…
S73
AI for agriculture Scaling Intelegence for food and climate resiliance — “We will move from pilots to platforms, from fragmented data to interoperable systems, from experimentation to execution…
S74
Resilient infrastructure for a sustainable world — – **Importance of Partnerships and Open Collaboration**: All speakers stressed that no single organization can address t…
S75
A digital public infrastructure strategy for sustainable development – Exploring effective possibilities for regional cooperation (University of Western Australia) — On a positive note, the potential for South-South cooperation and shared learning experiences in the field of DPI was hi…
S76
Collaborative Innovation Ecosystem and Digital Transformation: Accelerating the Achievement of Global Sustainable Development Goals (SDGs) — John OMO: Thank you very much, Mohamed. I really appreciate being here. Context. Africa has SMEs contributing conservati…
S77
Open Forum #82 Catalyzing Equitable AI Impact the Role of International Cooperation — Abhishek Agarwal: Yeah, like what we need to do, like a lot has already been spoken, and I would say that if I have to l…
S78
Panel Discussion AI in Digital Public Infrastructure (DPI) India AI Impact Summit — And accessibility has to be also broadened in terms of multi -modality and also, where necessary, include a human in the…
S79
Building India’s Digital and Industrial Future with AI — -India’s Global DPI Model and Knowledge Transfer: The discussion highlighted India’s role in sharing its DPI framework g…
S80
Open Forum #30 High Level Review of AI Governance Including the Discussion — Abhishek Singh, Under-Secretary from the Indian Ministry of Electronics and Information Technology, emphasised that oper…
S81
WS #43 States and Digital Sovereignty: Infrastructural Challenges — The speaker mentions India’s national ID system (Aadhaar) and payment system (UPI) as examples of DPI enhancing sovereig…
S82
AI Impact Summit 2026: Global Ministerial Discussions on Inclusive AI Development — The AI Impact Summit held in New Delhi brought together ministers and senior officials from multiple countries for discu…
S83
Democratizing AI: Open foundations and shared resources for global impact — Bernard Maissen: Yes, thank you. Hello, everybody, dear panelists. Nina, thank you for giving me the floor. In the globa…
S84
Leveraging AI to Support Gender Inclusivity | IGF 2023 WS #235 — There is an acknowledgement of the need for more alignment and coordination in the field of AI regulation. Efforts are b…
S85
Transforming Agriculture_ AI for Resilient and Inclusive Food Systems — We are committed to work together on this through knowledge sharing, co -operation and collaboration. creation and capac…
S86
Democratizing AI Building Trustworthy Systems for Everyone — Financial mechanisms | Artificial intelligence | Capacity development Role of Philanthropy and Public‑Private Partnersh…
S87
AI Innovation in India — The tone was consistently celebratory, inspirational, and optimistic throughout the discussion. Speakers expressed pride…
S88
AI for food systems — The tone throughout the discussion was consistently formal, optimistic, and collaborative. It maintained a ceremonial qu…
S89
Partnering on American AI Exports Powering the Future India AI Impact Summit 2026 — The tone is consistently optimistic, collaborative, and forward-looking throughout the discussion. Speakers emphasize “l…
S90
Panel Discussion AI & Cybersecurity _ India AI Impact Summit — The discussion maintained a consistently optimistic and collaborative tone throughout. Speakers expressed enthusiasm and…
S91
AI for Good Innovation Factory Grand Finale 2025 — The discussion maintained a consistently positive, encouraging, and professional tone throughout. It began with exciteme…
S92
https://dig.watch/event/india-ai-impact-summit-2026/building-trustworthy-ai-foundations-and-practical-pathways — I give this example because I’m fairly confident that when you look it up and when you try it yourself it will work. And…
S93
https://dig.watch/event/india-ai-impact-summit-2026/how-nonprofits-are-using-ai-based-innovations-to-scale-their-impact — There is a mentor talking to them. So we thought we’ll improve the student report first, but that didn’t work out so wel…
S94
https://dig.watch/event/india-ai-impact-summit-2026/keynote-ankur-vora — complex ones are referred appropriately and millions of lives are saved. AI will not just speed up innovation. It can he…
S95
https://dig.watch/event/india-ai-impact-summit-2026/leveraging-ai4all_-pathways-to-inclusion — And so we thought the biggest use case, the biggest investment would be on making the speakers better for Ray -Ban Meta …
S96
Keeping up with Smart Factories / DAVOS 2025 — The overall tone was optimistic and forward-looking. Panelists were enthusiastic about the potential of smart factory te…
S97
WS #283 AI Agents: Ensuring Responsible Deployment — The discussion maintained a balanced, thoughtful tone throughout, combining cautious optimism with realistic concern. Pa…
S98
India’s Roadmap to an AGI-Enabled Future — The discussion maintained an optimistic and ambitious tone throughout, with speakers expressing confidence in India’s ab…
S99
Regional perspectives on digital governance | IGF 2023 Open Forum #138 — There has been successful collaboration with stakeholder in Africa and other sub-regional activities
S100
Launch / Award Event #78 Digital Governance in Africa: Post-Summit of the Future — The tone was largely informative and collaborative, with panelists sharing research findings, policy perspectives, and r…
S101
World Economic Forum Annual Meeting Closing Remarks: Summary — These key comments transformed what could have been a standard ceremonial closing into a meaningful reflection on the ph…
S102
Closing remarks — The tone is consistently celebratory, optimistic, and forward-looking throughout the discussion. It maintains an enthusi…
S103
Closing Session  — Distinguished colleagues, as we come to the close of the summit, I want to particularly say how grateful I am for the op…
S104
Closing Ceremony — As the host country representative, Aukrust highlighted Norway’s commitment to digital inclusion through “investing in d…
S105
Powering AI Global Leaders Session AI Impact Summit India — -Prime Minister: (mentioned as having spoken the day before, but did not speak in this transcript) -Speaker: Role/title…
S106
Leaders’ Plenary | Global Vision for AI Impact and Governance- Afternoon Session — And just this week, you know, the first two -nanometer chip in India has been designed by our team. I thank Mr. Vesnav t…
S107
ChatGPT: A year in review — As ChatGPT turns one, the significance of its impact cannot be overstated. What started as a pioneering step in AI has s…
S108
https://dig.watch/event/india-ai-impact-summit-2026/ai-2-0-reimagining-indian-education-system — If we take AI out of Western knowledge, if we promote it in Indian knowledge, Indian context, Indian languages, then we …
S109
https://dig.watch/event/india-ai-impact-summit-2026/ai-2-0-the-future-of-learning-in-india — If we take AI out of Western knowledge, if we promote AI in Indian knowledge, Indian context, Indian languages, then we …
S110
AI in Africa: Beyond the algorithm — Global connectivity divide Kate highlights the fundamental infrastructure gap where close to half of the world’s popula…
S111
Global Digital Governance & Multistakeholder Cooperation for WSIS+20 — – Gitanjali Sah- Thibaut Kleiner- Participant Provides specific statistic of 2.6 billion people without internet access…
S112
Opening Ceremony — Chami highlighted the gap between basic Internet access statistics and meaningful connectivity, arguing that while 68% o…
S113
GPAI: A Multistakeholder Initiative on Trustworthy AI | IGF 2023 Open Forum #111 — Alan Paic:Thank you very much, Inma. And it is my pleasure to address you today. I will give an introduction to GPAI as …
S114
UNSC meeting: UNSC Conflict prevention: A New Agenda for Peace — In this speech, India’s representative addresses the complex nature of conflict prevention and peacekeeping in today’s w…
S115
https://dig.watch/event/india-ai-impact-summit-2026/ai-for-social-good-using-technology-to-create-real-world-impact — First one is diagnosis and diagnosing TB in economically vulnerable communities isn’t easy. X-ray machines, sputum anal…
S116
The Sustainable Development Goals Report 2019 — Tuberculosis remains a leading cause of poor health and death worldwide. An estimated 10 million people fell ill with th…
S117
The Millennium Development Goals Report 2015 — The tuberculosis (TB) incidence rate has been falling in all regions since 2000, declining by about 1.5 per cent per yea…
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
Sunil Wadhwani
7 arguments · 162 words per minute · 2,517 words · 932 seconds
Argument 1
TB detection and treatment cascade AI
EXPLANATION
Sunil describes how AI is used to improve tuberculosis detection and management in India, covering diagnosis, lab processing, and treatment adherence. The AI tools provide rapid, probabilistic assessments and automate laboratory workflows to speed up care.
EVIDENCE
He explains that the AI can detect TB from the sound of a cough using a smartphone, delivering instant risk probabilities and becoming the national standard, with the World Health Organization calling it a potential game-changer [56-63]. He also notes that an AI model now automates sputum analysis in 64 government labs, delivering results within a day [64-66]. Finally, AI algorithms predict which patients will drop off medication, allowing 2,000 caseworkers to focus on high-risk individuals, impacting tens of millions and raising detection rates by 25% in the last year [68-73].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
AI-driven TB detection from cough sounds and lab automation are documented in AI for Social Good case studies and in the Global South partnerships report, confirming rapid, probabilistic assessments and lab workflow automation [S5] [S15].
MAJOR DISCUSSION POINT
Health AI scaling
Argument 2
AI‑based early reading proficiency tools to reduce school dropout
EXPLANATION
Sunil outlines an AI‑driven suite that assesses early reading skills and delivers personalized exercises to young learners, aiming to curb high dropout rates in primary school. The solution is being scaled across millions of children after successful pilots.
EVIDENCE
He identifies the inability of children aged 7-9 to read as the main cause of dropout and describes an AI-based tool that creates personalized reading exercises for each child, delivering stories they can practice at home [80-86]. After a pilot with a state, the program was made mandatory for 3 million school-age children in Rajasthan, demonstrating large-scale adoption [87-90].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The reading-improvement system scaling to tens of millions of children and its impact on dropout rates are described in the AI for Social Good overview and the Global South partnerships analysis [S5] [S15].
MAJOR DISCUSSION POINT
Education AI intervention
Argument 3
Early government engagement, scalability planning, integration with Nikshay and Rakshak platforms
EXPLANATION
Sunil emphasizes that successful AI impact requires partnering with government from the outset, aligning with national priorities, and embedding solutions into existing public digital platforms. This approach ensures rapid scaling and accountability.
EVIDENCE
He recounts working directly with the health and education ministries to identify priority use cases such as TB and early reading, and then integrating AI algorithms into the government’s Nikshay TB case-management system and the Rakshak education platform used by 70,000 schools and 8 million students in Rajasthan [37-41][118-124].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The integration of AI models into India’s Nikshay TB case-management system and the Rakshak education platform is highlighted in the Global South partnerships briefing [S15].
MAJOR DISCUSSION POINT
Government partnership for scaling
AGREED WITH
S. Krishnan, Lacina Kone, Ankur Vora
Argument 4
India’s digital ID, UPI, and other public infrastructure enable AI at scale
EXPLANATION
Sunil points to India’s established digital public goods—Aadhaar and UPI—as foundational infrastructure that facilitates the deployment of AI solutions across sectors. These systems provide identity verification and payment mechanisms essential for large‑scale AI services.
EVIDENCE
He cites Aadhaar as a great example of digital ID and UPI as an incredible digital payments infrastructure, highlighting their role in enabling AI applications in health, education, and agriculture [111-114].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Aadhaar’s digital ID role and UPI’s payments infrastructure as foundations for large-scale AI services are discussed in the digital public infrastructure commentary [S14] and the partnerships report [S15].
MAJOR DISCUSSION POINT
Digital public infrastructure
AGREED WITH
Lacina Kone, S. Krishnan
Argument 5
Mutual learning, not one‑way technology transfer, leveraging similar challenges
EXPLANATION
Sunil argues that AI challenges in the Global South are shared, and collaboration should be a two‑way exchange of ideas rather than a simple export of Indian solutions. Both regions can learn from each other’s experiences.
EVIDENCE
He notes that the challenges and values across Africa, India, and other countries are similar, emphasizing mutual learning and shared ideas rather than one-way technology transfer [363-370].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The report on building scalable AI emphasizes a two-way learning approach among Global South nations and cites technology-transfer frameworks aligned with SDG 9 and 17 [S15] [S20].
MAJOR DISCUSSION POINT
South‑South knowledge exchange
AGREED WITH
Shalini Kapoor, Lacina Kone, S. Krishnan, Shikoh Gitau
Argument 6
Capacity‑building for civil servants and large deployment teams to ensure rollout
EXPLANATION
Sunil describes how his organization trains senior government officials on AI use, develops data‑governance standards, and maintains a sizable deployment team to operationalize solutions at scale. This capacity work underpins successful implementation.
EVIDENCE
He mentions training senior civil servants on AI capabilities and limitations, helping ministries develop data-governance standards, and maintaining a deployment team of close to 100 people to ensure rollout [145-151].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Capacity-building for public-sector staff and the need for civil-service AI readiness are outlined in the public-sector AI preparation brief and the capacity-building foreword [S21] [S22].
MAJOR DISCUSSION POINT
Institutional capacity building
Argument 7
Patience and resilience amid rapid AI pace and logistical challenges
EXPLANATION
Sunil reflects on the fast‑moving nature of AI development and the practical challenges (such as traffic) faced during the summit, suggesting that patience is essential for progress. He frames these obstacles as teaching moments.
EVIDENCE
He states that AI is making the world move faster and that traffic challenges in India are teaching patience, assuring that things will happen despite logistical hurdles [391-395].
MAJOR DISCUSSION POINT
Personal resilience in AI deployment
AGREED WITH
S. Krishnan, Shalini Kapoor, Ankur Vora
Ankur Vora
2 arguments · 92 words per minute · 584 words · 377 seconds
Argument 1
Innovation‑to‑impact gap highlighted
EXPLANATION
Ankur points out that moving from an innovative AI prototype to real‑world impact is not a straight line; it requires deliberate effort and partnership. He praises Sunil’s learnings as examples of bridging this gap.
EVIDENCE
He remarks that the journey between innovation and impact is not straight, noting that while it is possible, it is not guaranteed and requires hard work, referencing Sunil’s lessons as illustrative [131-134].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The gap between AI prototypes and real-world impact, and the need for deliberate partnership, is a central theme of the Global South partnerships analysis [S15].
MAJOR DISCUSSION POINT
Bridging innovation and impact
AGREED WITH
Sunil Wadhwani, S. Krishnan, Shalini Kapoor
Argument 2
Gates Foundation partnership fuels AI for impact initiatives
EXPLANATION
Ankur highlights the long‑standing partnership with the Gates Foundation and a new initiative, Advantage India for AI, which will channel investments into AI solutions for the Global South. This collaboration is positioned as a catalyst for scaling impact.
EVIDENCE
He mentions the seven-year partnership with the Gates Foundation and the newly announced Advantage India for AI pledge, which aims to invest in AI for the Global South and anticipates further collaboration with Sunil’s team [135-137].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Ankur Vora’s role at the Gates Foundation and the foundation’s AI for impact collaborations are documented in the summit keynote and profile summaries [S10] [S9].
MAJOR DISCUSSION POINT
Philanthropic partnership
S. Krishnan
2 arguments · 170 words per minute · 1,499 words · 526 seconds
Argument 1
India AI Mission model: subsidised compute, sovereign models, open‑source data for the Global South
EXPLANATION
Krishnan outlines India’s AI Mission, which provides low‑cost compute, sovereign AI models, and open data to enable other countries to adopt AI. The model is framed as a frugal, publicly‑subsidised approach that can be shared globally.
EVIDENCE
He describes a model where AI compute in India is offered at a third of the global price, sovereign models built with taxpayer resources, and an open-source data framework that can be shared with the UN and other Global South nations, emphasizing the AI Kosh model and willingness to share it now [305-319][320-328][329-344].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
India’s AI Mission offering low-cost compute, sovereign models, and open-data frameworks for sharing with other countries is described in the scalable AI partnerships report and the shared infrastructure discussion [S15] [S28].
MAJOR DISCUSSION POINT
National AI infrastructure for global sharing
AGREED WITH
Sunil Wadhwani, Lacina Kone
Argument 2
Democratizing AI through inclusive participation, youth engagement, and open access
EXPLANATION
Krishnan stresses that true AI democratization means opening the rooms and halls of AI events to a broad audience, especially youth, so they can see and understand AI applications firsthand. He frames this as a core achievement of the summit.
EVIDENCE
He notes that the summit allowed people, especially youth, to enter rooms and listen to leading AI minds, emphasizing inclusive participation, and declares that this reflects the democratization of AI [289-298][327-329].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Inclusive AI events that welcome youth and broaden participation are highlighted in the Global South partnerships briefing and the open-forum on AI policy pathways [S15] [S28].
MAJOR DISCUSSION POINT
Inclusive AI engagement
AGREED WITH
Sunil Wadhwani, Shalini Kapoor, Ankur Vora
Shalini Kapoor
2 arguments · 113 words per minute · 923 words · 488 seconds
Argument 1
AI diffusion as shared playbooks and cross‑country learning
EXPLANATION
Shalini describes AI diffusion as the creation of reusable playbooks and digital rails that allow solutions to move between countries, similar to how electricity spread historically. She argues that shared knowledge accelerates adoption across the Global South.
EVIDENCE
She explains that AI diffusion involves laying routes and rails, creating playbooks that can be shared, referencing Geoffrey Hinton’s analogy to electricity’s spread from Germany to India and the USA, and stresses that this enables cross-country learning [179-190].
MAJOR DISCUSSION POINT
Knowledge sharing mechanisms
AGREED WITH
Sunil Wadhwani, Lacina Kone, S. Krishnan, Shikoh Gitau
Argument 2
Emphasis on collaboration over competition as the summit’s key takeaway
EXPLANATION
Shalini highlights that the summit demonstrated AI as a collaborative effort rather than a competitive race, citing the joint announcement of 100 pathways to 2030 and the collective presence of partners from various regions. She frames this collaborative spirit as the main lesson.
EVIDENCE
She recounts that partners from Italy, Kenya, Anthropic, Google, Carnegie, Gates, and others gathered at the Oberoi event to launch 100 pathways to 2030, emphasizing that AI is about collaboration not competition [408-410].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The summit’s collaborative spirit, including the joint launch of 100 pathways to 2030, is emphasized in the partnership report and the collaborative strategies forum summary [S15] [S24].
MAJOR DISCUSSION POINT
Collaboration as core message
AGREED WITH
Sunil Wadhwani, S. Krishnan, Ankur Vora
Lacina Kone
3 arguments · 159 words per minute · 807 words · 303 seconds
Argument 1
Smart Africa AI Council: multi‑nation coordination, private‑sector‑led ecosystem
EXPLANATION
Lacina outlines the formation of the Africa AI Council, a multi‑government and private‑sector body that coordinates AI policy, investment, and thematic groups across the continent. The council is designed to foster private‑sector execution within a supportive regulatory environment.
EVIDENCE
He details that 49 African countries signed a declaration in April 2025, that the AI Council was launched in November 2025 with 15 members (seven ministers and eight private-sector representatives), and that six thematic groups address computing, data, skills, regulation, market, and investment, emphasizing private-sector leadership [213-224][225-236].
MAJOR DISCUSSION POINT
Continental AI governance structure
AGREED WITH
Sunil Wadhwani, Shalini Kapoor, S. Krishnan, Shikoh Gitau
Argument 2
Private sector execution supported by conducive regulatory environment; philanthropy as de‑risking layer
EXPLANATION
Lacina argues that a favorable regulatory climate enables private‑sector AI execution, while philanthropy can act as a de‑risking mechanism to accelerate investment. He stresses that finance is secondary to having the right regulatory ‘clouds’.
EVIDENCE
He states that government must create a conducive environment for the private sector, that finance is the last consideration, likening financing to rain that needs clouds (the regulatory environment) before it can fall, and that philanthropy should serve as a de-risking layer for projects where governments are hesitant to invest [231-240].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Balancing regulation with private-sector AI execution and the role of philanthropy as a de-risking mechanism are discussed in the private-sector innovation commentary and the private-capital investment overview [S23] [S25].
MAJOR DISCUSSION POINT
Enabling private sector and philanthropic support
Argument 3
Vision of a single African digital market and regulatory harmonisation
EXPLANATION
Lacina presents a vision of unifying Africa’s 1.4 billion people into a single digital market with harmonised regulations, drawing parallels to India’s digital public infrastructure successes. He sees regulatory harmonisation as a key obstacle to overcome.
EVIDENCE
He notes that Africa’s 1.4 billion population can be leveraged through collaboration, cites India’s digital ID and election management as examples, and emphasizes the need for regulatory harmonisation across the continent, referencing his own vision of transforming Africa into a single digital market [202-207][402-406].
MAJOR DISCUSSION POINT
Continental digital integration
Shikoh Gitau
2 arguments · 152 words per minute · 394 words · 155 seconds
Argument 1
Need for political goodwill and reducing “collaboration tax” to enable partnership
EXPLANATION
Shikoh stresses that political goodwill is essential for AI collaboration and that the effort, resources, and coordination required—termed the ‘collaboration tax’—must be minimized to make partnerships feasible. She calls for mechanisms that lower these barriers.
EVIDENCE
She defines the ‘collaboration tax’ as the effort and resources needed to collaborate, argues that governments should handle this to reduce pain, and notes that while people, builders, researchers, and policymakers are present, political goodwill is needed to bring them together [266-273].
MAJOR DISCUSSION POINT
Reducing barriers to cross‑border AI work
Argument 2
Celebration of Global South unity and multi‑horse race narrative
EXPLANATION
Shikoh shares a personal moment of seeing a diverse audience of about 300 people, interpreting it as evidence that the Global South is collectively advancing AI, moving from a two‑horse race to a multi‑horse race. This reflects a sense of shared purpose.
EVIDENCE
She recounts standing at the Oberoi venue, seeing a diverse sea of roughly 300 faces, and feeling that the Global South can collectively drive AI forward, describing the shift from a two-horse race to a multiple-horse race [398-401].
MAJOR DISCUSSION POINT
Collective Global South momentum
Agreements
Agreement Points
Scaling AI solutions requires early and deep partnership with government and integration into existing public digital platforms.
Speakers: Sunil Wadhwani, S. Krishnan, Lacina Kone, Ankur Vora
Early government engagement, scalability planning, integration with Nikshay and Rakshak platforms
India AI Mission model: subsidised compute, sovereign models, open‑source data for the Global South
India’s digital ID, UPI, and other public infrastructure enable AI at scale
Innovation‑to‑impact gap highlighted
All speakers stress that working with government from the outset and embedding AI tools into national platforms such as Nikshay, Rakshak, Aadhaar, and UPI is essential for rapid, large-scale impact, and that public support (subsidised compute, open models) underpins this scaling [37-41][118-124][111-114][305-319][131-134].
POLICY CONTEXT (KNOWLEDGE BASE)
Multistakeholder frameworks call for governments to create enabling environments and embed AI in existing digital public infrastructure, as highlighted in discussions on DPI and government-private collaboration [S42][S47][S64][S66].
South‑South collaboration should be a two‑way exchange of knowledge and playbooks rather than a one‑way technology transfer.
Speakers: Sunil Wadhwani, Shalini Kapoor, Lacina Kone, S. Krishnan, Shikoh Gitau
Mutual learning, not one‑way technology transfer, leveraging similar challenges
AI diffusion as shared playbooks and cross‑country learning
Smart Africa AI Council: multi‑nation coordination, private‑sector‑led ecosystem
Democratizing AI through inclusive participation, youth engagement, and open access
Need for political goodwill and reducing ‘collaboration tax’ to enable partnership
Speakers agree that countries in the Global South face similar AI challenges and should share solutions, playbooks, and regulatory lessons, fostering mutual learning and joint pathways such as the 100 pathways to 2030 initiative [363-370][179-190][198-224][289-298][266-273].
POLICY CONTEXT (KNOWLEDGE BASE)
Policy briefs stress bidirectional learning between governments and technical communities and note recent South-South statements on technology-transfer inclusion [S55][S56][S58].
Democratizing AI and ensuring inclusive participation are central to achieving impact.
Speakers: Sunil Wadhwani, S. Krishnan, Shalini Kapoor, Ankur Vora
Patience and resilience amid rapid AI pace and logistical challenges
Democratizing AI through inclusive participation, youth engagement, and open access
Emphasis on collaboration over competition as the summit’s key takeaway
Innovation‑to‑impact gap highlighted
All speakers highlight that AI must be made accessible to frontline workers, youth, and broader society, with inclusive events and tools that lower barriers, thereby democratizing the technology [125-127][327-329][408-410][131-134].
POLICY CONTEXT (KNOWLEDGE BASE)
Inclusive AI governance is a core element of AI-for-democracy agendas and open-foundation initiatives, emphasizing participation beyond the Global North [S50][S52][S59].
Robust digital public infrastructure (e.g., Aadhaar, UPI, Nikshay, Rakshak) is the foundation for large‑scale AI deployment.
Speakers: Sunil Wadhwani, Lacina Kone, S. Krishnan
India’s digital ID, UPI, and other public infrastructure enable AI at scale
India AI Mission model: subsidised compute, sovereign models, open‑source data for the Global South
The panel repeatedly points to India’s existing digital public goods (Aadhaar, UPI, and sector-specific platforms like Nikshay and Rakshak) as critical enablers for AI solutions, and notes that the AI Mission extends this model with low-cost compute and open data for sharing globally [111-124][204-206][305-319].
POLICY CONTEXT (KNOWLEDGE BASE)
Digital public infrastructure is repeatedly cited as the backbone for scalable AI, with examples from India’s DPI and calls to treat AI as shared public infrastructure [S64][S65][S66][S47].
Similar Viewpoints
Both emphasize building capacity within government—through training civil servants, deploying dedicated teams, and engaging youth—to ensure AI tools are adopted and scaled effectively [145-151][327-329].
Speakers: Sunil Wadhwani, S. Krishnan
Capacity‑building for civil servants and large deployment teams to ensure rollout
Democratizing AI through inclusive participation, youth engagement, and open access
Both stress that moving from prototype to impact requires deliberate partnership and collaborative spirit rather than isolated competition [131-134][408-410].
Speakers: Ankur Vora, Shalini Kapoor
Innovation‑to‑impact gap highlighted
Emphasis on collaboration over competition as the summit’s key takeaway
Unexpected Consensus
Role of the private sector in scaling AI alongside government
Speakers: Lacina Kone, Sunil Wadhwani
Private sector execution supported by conducive regulatory environment; philanthropy as de‑risking layer
Capacity‑building for civil servants and large deployment teams to ensure rollout
While Lacina foregrounds private-sector-led execution within a supportive regulatory cloud, Sunil, a primarily government-partnered actor, also highlights a sizable deployment team and the need for private-sector-friendly environments, revealing an unexpected alignment on the necessity of private sector involvement for scale [231-236][145-151].
POLICY CONTEXT (KNOWLEDGE BASE)
Guidelines highlight the private sector’s execution capacity when governments provide conducive frameworks, underscoring private contributions to SDG-aligned AI deployment [S42][S46][S47][S48].
Open‑source sovereign AI models as a shared global resource
Speakers: Lacina Kone, S. Krishnan
Vision of a single African digital market and regulatory harmonisation
India AI Mission model: subsidised compute, sovereign models, open‑source data for the Global South
Lacina’s vision of a harmonised African digital market aligns unexpectedly with Krishnan’s description of India’s sovereign AI models and open data that can be shared globally, indicating converging views on open, reusable AI assets across continents [402-406][305-319].
POLICY CONTEXT (KNOWLEDGE BASE)
Emerging policy concepts promote open-foundation, sovereign-able AI models that can be customized locally while remaining globally shared [S59][S60].
Overall Assessment

The discussion shows strong convergence on four pillars: (1) government partnership and integration with public digital platforms; (2) South‑South mutual learning and shared playbooks; (3) democratization and inclusive participation; (4) reliance on robust digital public infrastructure. These shared positions span AI, ICT‑for‑development, capacity building, and enabling environments, indicating a cohesive vision for scaling AI impact across the Global South.

High consensus – most speakers reinforce each other’s points, creating a unified narrative that AI impact will be driven by public‑private collaboration, open infrastructure, and cross‑regional learning. This alignment suggests that future initiatives are likely to be coordinated, leveraging government platforms, shared resources, and South‑South partnerships to accelerate AI‑enabled development.

Differences
Different Viewpoints
Approach to scaling AI solutions – government‑led versus private‑sector‑led
Speakers: Sunil Wadhwani, Lacina Kone
Sunil: “the only way to scale is government”; stresses working with ministries from day one, making government accountable, and integrating AI into public platforms [92-94][118-124]
Lacina: emphasizes that private-sector execution is primary, with government only creating a conducive regulatory environment; finance is deemed not the issue, likening it to rain needing clouds (regulation) [231-236]
Sunil argues that scaling AI impact requires direct government partnership and integration, while Lacina contends that scaling should be driven by the private sector once the regulatory environment is set, downplaying the role of government in execution and stating finance is not the main barrier. Their views diverge on who should lead and what factors are decisive for scaling.
POLICY CONTEXT (KNOWLEDGE BASE)
Debates reference multistakeholder partnership models that balance government leadership with private-sector scaling capabilities [S47][S46].
What is the primary constraint for AI deployment – financing versus regulatory/environmental readiness
Speakers: Lacina Kone, Sunil Wadhwani
Lacina: claims that finance is the last consideration and that the regulatory ‘clouds’ are what enable investment, using the metaphor that financing is like rain needing clouds [233-236]
Sunil: focuses on building a large deployment team, training civil servants, and developing data-governance standards, implying substantial resource and funding needs for rollout [145-151]
Lacina downplays financing as a barrier, emphasizing regulatory conditions, whereas Sunil highlights the need for significant human and financial resources to operationalise AI solutions, indicating a disagreement on which factor is most critical.
POLICY CONTEXT (KNOWLEDGE BASE)
Analyses from the Global South identify compute and talent shortages as larger barriers than finance or regulation, while also noting environmental and regulatory challenges [S53][S54][S56].
Unexpected Differences
Finance portrayed as non‑issue
Speakers: Lacina Kone
Lacina: states that “finance is not the issue” and that the regulatory environment is the real prerequisite for investment [233-236]
While most participants discuss funding, capacity building, and deployment resources, Lacina’s outright dismissal of finance as a barrier is unexpected and not directly contested by other speakers, highlighting a divergent perception of financing importance.
POLICY CONTEXT (KNOWLEDGE BASE)
Some assessments argue that financing is not the main bottleneck for AI rollout, emphasizing technical constraints such as compute infrastructure instead [S53].
Introduction of a ‘collaboration tax’ concept
Speakers: Shikoh Gitau
Shikoh: defines the collaboration tax as the effort and resources required to collaborate, urging reduction of this burden for effective partnerships [266-273]
The notion of a collaboration tax is not taken up by any other participant, making it a distinctive, uncontested framing of how cross‑border AI work should be facilitated.
POLICY CONTEXT (KNOWLEDGE BASE)
Policy discussions on taxing digital platforms and tech giants explore mechanisms that could be adapted as a ‘collaboration tax’ to fund joint AI initiatives [S62].
Overall Assessment

The discussion shows modest disagreement centered on the preferred scaling model (government versus private sector) and the perceived primary constraints (finance versus regulatory environment). Participants share a common goal of expanding AI impact in the Global South, but differ on the mechanisms and priorities to achieve it. Unexpected divergences arise around the role of financing and the concept of a collaboration tax.

Low to moderate – disagreements are largely about emphasis and strategy rather than fundamental opposition, suggesting that consensus can be built through coordinated policy design that balances government leadership, private‑sector dynamism, regulatory readiness, and adequate financing.

Partial Agreements
Both aim to foster South‑South collaboration on AI, but Sunil focuses on mutual technical learning, while Shikoh highlights political facilitation and lowering collaboration costs as the key to making such partnerships work [363-370][266-273]
Speakers: Sunil Wadhwani, Shikoh Gitau
Sunil: stresses mutual learning and two-way knowledge exchange between India and other Global South countries [363-370]
Shikoh: emphasizes the need for political goodwill and reducing the ‘collaboration tax’ to enable cross-border partnerships [266-273]
Both seek widespread AI adoption across the Global South, yet Shalini proposes structured knowledge‑sharing mechanisms, whereas Krishnan stresses inclusive participation and experiential learning as the path to democratization [179-190][289-298]
Speakers: Shalini Kapoor, S. Krishnan
Shalini: describes AI diffusion as creating shared playbooks and digital rails that allow solutions to move between countries [179-190]
Krishnan: frames democratizing AI as opening rooms and halls to youth and broader audiences, enabling people to see and engage with AI directly [289-298]
Takeaways
Key takeaways
AI can be leveraged for high‑impact health and education outcomes in the Global South, exemplified by TB detection via cough analysis and AI‑driven early‑reading tools in India.
Successful scaling requires early and deep partnership with government, integration with existing digital public infrastructure (e.g., Nikshay, Rakshak, Aadhaar, UPI) and planning for deployment at scale from day one.
India’s AI Mission model of subsidised compute, sovereign open‑source models, and shared data pipelines offers a template that can be exported to other low‑ and middle‑income countries.
South‑South collaboration is essential; the focus should be on sharing playbooks, mutual learning, and reducing the “collaboration tax” rather than one‑way technology transfer.
A conducive regulatory environment, private‑sector execution, and philanthropic de‑risking together create an ecosystem that can move AI from prototype to production.
Democratizing AI means inclusive participation (youth, civil servants, NGOs), open access to resources, and keeping humans at the centre of AI deployments.
Collaboration over competition was repeatedly highlighted as the overarching spirit of the summit.
Resolutions and action items
Wadhwani Institute to launch operations in Rwanda, Ethiopia and Kenya and to continue expanding AI solutions globally.
Commitment to impact 500 million people by 2040 through AI‑driven health and education platforms.
India’s AI Mission to share its compute‑as‑a‑service model, sovereign AI models (AI Kosh), and open‑source datasets with other Global South nations.
Smart Africa AI Council established to coordinate AI policy, investment, data, skills, and regulation across 49 African countries.
Gates Foundation partnership to fund and support AI‑for‑impact initiatives in India and other Global South countries.
Capacity‑building programmes for senior civil servants on AI governance, data standards and use‑case development to be replicated abroad.
Development of the “100 Pathways to 2030” playbook to document successful AI deployment routes for reuse across regions.
Unresolved issues
Specific mechanisms for operationalising South‑South knowledge transfer and joint implementation (e.g., joint funding models, joint governance structures) remain undefined.
How to quantitatively reduce the “collaboration tax” (the time, resources and coordination overhead) was discussed, but no concrete framework was agreed upon.
Financing models for large‑scale AI rollout beyond initial government subsidies were mentioned as a challenge, with no clear solution presented.
Regulatory harmonisation across diverse legal and linguistic contexts (e.g., between India’s multilingual environment and African nations) needs further work.
Ensuring sustained frontline adoption (health workers, teachers) beyond initial rollout, including how to monitor and iterate on usability, was highlighted but not resolved.
Suggested compromises
Balancing private‑sector execution with public‑sector regulation: governments provide an enabling environment while private firms handle implementation, with philanthropy acting as a de‑risking layer.
Integrating AI tools into existing government platforms (Nikshay, Rakshak) rather than building parallel systems, to minimise duplication and ease adoption.
Designing AI solutions that both improve outcomes for end‑users and simplify the workflow of frontline workers, thereby aligning technical goals with user convenience.
Thought Provoking Comments
We realized we were not approaching it quite right. Having a nice technical solution is not enough; you need to work with government from day one, think about scale from the start, integrate with existing digital public infrastructure, and make sure the tool makes life easier for frontline workers.
This reframes the common tech‑first mindset by highlighting that impact depends on policy, scale‑thinking, and user‑centric design, not just algorithmic brilliance.
It shifted the conversation from describing AI products to discussing systemic factors for scaling. It prompted Ankur to reflect on the innovation‑to‑impact gap and opened the floor for the panel to explore government partnerships and South‑South knowledge transfer.
Speaker: Sunil Wadhwani
AI‑based cough detection for TB is now rolling out nationally in India, becoming the national standard and recognized by WHO as a potential game‑changer globally.
Provides a concrete, high‑impact example of AI moving from prototype to nationwide deployment, illustrating the earlier point about scaling through government channels.
Anchored the abstract discussion in a real‑world success story, giving other speakers (e.g., Shalini and Lacina) a tangible reference when talking about diffusion and replication in other countries.
Speaker: Sunil Wadhwani
The only way to scale is government. You have to approach senior civil servants with humility, plan deployment at scale before you even build the solution, and hold the government accountable alongside the technical team.
Emphasizes partnership dynamics and accountability, challenging the notion that private innovators can scale alone.
Led Ankur to acknowledge the non‑linear path from innovation to impact and set the stage for the panel’s focus on public‑private ecosystems and South‑South collaboration.
Speaker: Sunil Wadhwani
AI diffusion is about the routes and rails that need to be laid, just like digital rails were laid for DPI. Playbooks can be shared so that a solution built in Kenya can be used in India and vice‑versa.
Introduces the metaphor of diffusion infrastructure, moving the dialogue from isolated projects to a systematic, replicable framework.
Prompted Lacina to discuss the role of regulatory and financial ecosystems, and Shikoh to bring up the concept of a ‘collaboration tax,’ deepening the conversation about what enables diffusion.
Speaker: Shalini Kapoor
Finance is not the issue; the regulatory environment is the cloud that creates rain. Private sector needs a predictable, conducive environment to invest, and philanthropy should act as a de‑risking layer.
Reframes the common belief that lack of capital blocks AI adoption, instead highlighting policy and risk mitigation as the true bottlenecks.
Shifted the panel’s focus from funding to governance, influencing Shikoh’s point about making AI a political and economic issue and reinforcing Sunil’s earlier remarks about government’s role.
Speaker: Lacina Kone
We need to start talking about the ‘collaboration tax’ – the effort, resources, and coordination required to make cross‑country AI projects work, and how governments should shoulder that burden.
Coins a new term that captures the hidden costs of partnership, prompting a nuanced discussion about who should bear those costs and how to streamline collaboration.
Added a layer of complexity to the South‑South dialogue, leading participants to consider practical mechanisms for joint work rather than just high‑level aspirations.
Speaker: Shikoh Gitau
Democratizing AI means letting people into the rooms, making compute available at a third of global prices, building sovereign models, and sharing them openly with the Global South.
Articulates a comprehensive, frugal model for AI democratization that combines infrastructure, policy, and open‑source ethos, moving beyond isolated pilots.
Provided a macro‑level vision that resonated with all panelists, reinforcing Sunil’s points about government support and prompting the final reflections on how the summit itself embodied democratic AI.
Speaker: S. Krishnan
Overall Assessment

The discussion pivoted around a few core insights: the necessity of government partnership and scale‑by‑design, the power of concrete, nationally‑backed AI deployments, and the need for systematic diffusion pathways supported by regulatory and political frameworks. Sunil’s early remarks about moving beyond pure technology set the tone, which was expanded by Shalini’s diffusion metaphor, Lacina’s regulatory focus, and Shikoh’s ‘collaboration tax’ concept. Krishnan’s framing of democratic AI tied these strands together, positioning the summit itself as a model of inclusive, frugal AI. Collectively, these comments shifted the conversation from showcasing isolated innovations to constructing a replicable, cross‑regional ecosystem for AI impact in the Global South.

Follow-up Questions
What are the specific steps and challenges in adapting India’s AI solutions (e.g., TB cough detection, sputum analysis) for deployment in African countries like Rwanda, Ethiopia, and Kenya?
Sunil mentioned starting operations in these countries but did not detail the localization process, indicating a need for deeper insight into transferability and implementation hurdles.
Speaker: Sunil Wadhwani
How can the ‘collaboration tax’—the resources and effort required for cross‑country AI collaborations—be measured and reduced?
Shikoh introduced the concept of collaboration tax but did not explore metrics or mitigation strategies, highlighting an area for further study.
Speaker: Shikoh Gitau
What financing models best unlock private‑sector investment for AI deployment in the Global South, given that regulatory environment is the primary enabler?
Lacina argued finance is not the core issue and emphasized regulatory conditions, suggesting research into financing mechanisms linked to policy frameworks.
Speaker: Lacina Kone
How can India’s AI Kosh (AI treasury) model be evaluated and adapted for other nations seeking sovereign AI infrastructure?
Krishnan described the AI Kosh model and its willingness to share it, but did not provide evaluation criteria or adaptation pathways, indicating a research gap.
Speaker: S. Krishnan
What are the cost‑effective AI compute strategies that allow India to offer compute at one‑third of global prices, and how can these be replicated elsewhere?
Krishnan highlighted frugal compute as a key achievement but did not detail the technical or policy levers, warranting further investigation.
Speaker: S. Krishnan
How can the ‘100 Pathways to 2030’ framework be operationalized to guide AI pilots through to production across diverse contexts?
Shalini referenced the 100 Pathways initiative but did not outline implementation steps, indicating a need for concrete methodology.
Speaker: Shalini Kapoor
What are the best practices and challenges for integrating AI algorithms into existing government digital public‑infrastructure platforms (e.g., Nikshay, Rakshak)?
Sunil noted successful integrations but did not discuss technical, governance, or user‑adoption challenges, suggesting further research.
Speaker: Sunil Wadhwani
What metrics and impact‑evaluation methods should be used for large‑scale AI interventions in health (TB) and education (early reading) to inform replication in other countries?
While Sunil shared impact numbers, a systematic evaluation framework was not described, pointing to a research need.
Speaker: Sunil Wadhwani
What governance and data‑standard frameworks are being developed for AI use in Indian ministries, and how can these be adapted for other Global South governments?
Sunil mentioned capacity‑building on data governance but did not detail the standards, indicating an area for further study.
Speaker: Sunil Wadhwani
How can digital public infrastructure (e.g., Aadhaar, UPI) be leveraged or replicated to enable AI scaling in other countries?
Sunil highlighted India’s digital infrastructure as a scaling enabler but did not explore replication models for other contexts.
Speaker: Sunil Wadhwani
What mechanisms can align private‑sector, philanthropic, and government actors to de‑risk AI projects in the Global South?
Lacina discussed the role of philanthropy as a de‑risking layer but did not specify coordination mechanisms, suggesting further inquiry.
Speaker: Lacina Kone
How can AI tools be designed to genuinely reduce workload and improve acceptance among frontline health workers and teachers?
Sunil emphasized making tools easier for frontline users but did not provide design guidelines or evidence of impact, indicating a research opportunity.
Speaker: Sunil Wadhwani

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.